I ended last week’s Tech Tuesday with a call for ideas for a sample project and got a couple of suggestions. Since then, though, I have come across a great idea for a project. And since I am teaching my Skillshare class today and therefore spending most of the morning hours reviewing my materials, I am keeping this post short to share the idea and its background.
It’s simple: build a brain. OK, so that’s a vast overstatement. Really it will be more like simulating a bunch of neurons. But a lot has happened since I first read about neural networks in Rumelhart and McClelland’s Parallel Distributed Processing (1986). And recent progress suggests there is real movement in the field.
A lot of it seems to come from Canada. There is Geoff Hinton’s work on deep learning at the University of Toronto. And separately Chris Eliasmith’s work on a structurally realistic brain with visual recognition, memory and output. Also, there was a recent paper by two Google engineers on learning to recognize faces without an explicit training set.
Building something even at the toy level here will be a fun excuse to read up a bit more on these recent developments.
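To give a flavor of what the toy level might look like: the "bunch of neurons" idea can be sketched in a few lines of Python, assuming simple threshold (McCulloch-Pitts style) units rather than anything biologically realistic. The wiring below is purely illustrative; it hand-sets weights so that three such neurons compute XOR, a function a single neuron famously cannot.

```python
def neuron(inputs, weights, bias):
    """A threshold unit: fire (1) if the weighted sum of inputs exceeds the bias."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

def tiny_xor(inputs):
    """Three neurons wired into a two-layer network that computes XOR."""
    hidden = [
        neuron(inputs, [1.0, 1.0], -0.5),    # acts like OR
        neuron(inputs, [-1.0, -1.0], 1.5),   # acts like NAND
    ]
    return neuron(hidden, [1.0, 1.0], -1.5)  # acts like AND of the two above

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", tiny_xor([a, b]))
```

A real project would of course learn the weights from data rather than set them by hand, but even this toy shows the core idea: individually dumb units, combined in layers, compute things no single unit can.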