Today the team at Clarifai is releasing an amazing second version of their API that makes two breakthroughs available to every developer: custom training and fast visual search.
The beauty of Clarifai’s API when it first launched was that you could be up and running in minutes and get great results with their models. But what if you wanted to train your own concepts? Well, if you were a bigger customer willing to pay, Clarifai would train a custom model for you. With the newly released version of the API you can do it yourself in just a few lines of code.
In fact it is so easy that I built a custom classifier this morning before coffee. Literally before coffee, because I was so excited that I didn’t need caffeine. I wanted to see if I could get a model to distinguish between Optimists and Lasers, two popular types of dinghies. I went over to Flickr, searched for “sail optimist,” and scrolled a bit to fill the page. I then saved the HTML and used grep to extract all the image preview links, writing them to a file called “opti.” I did the same for lasers – I wound up with a couple hundred images each.
Here is the entire code needed to use those files to train a model that can distinguish the two:
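In sketch form, using Clarifai’s Python client (a minimal sketch: the API key is a placeholder and “dinghies” is just a model name I picked):

# Minimal sketch with the Clarifai v2 Python client (pip install clarifai).
from clarifai.rest import ClarifaiApp

app = ClarifaiApp(api_key='YOUR_API_KEY')  # placeholder credentials

# Upload every image URL from each file, tagged with its concept.
for url in open('opti'):
    app.inputs.create_image_from_url(url=url.strip(), concepts=['opti'])
for url in open('laser'):
    app.inputs.create_image_from_url(url=url.strip(), concepts=['laser'])

# Create a model covering the two concepts and train it.
model = app.models.create(model_id='dinghies', concepts=['opti', 'laser'])
model.train()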
Yes. That’s it. The entire code. And then to test, I wrote another short piece that lets me try an image from the command line:
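Again in sketch form under the same assumptions, this one takes an image URL as its argument:

# Command-line tester, same assumptions as the training sketch above.
import sys

from clarifai.rest import ClarifaiApp

app = ClarifaiApp(api_key='YOUR_API_KEY')  # placeholder credentials
model = app.models.get('dinghies')

# Predict concepts for the image URL passed on the command line
# and print each concept with its probability.
response = model.predict_by_url(url=sys.argv[1])
for concept in response['outputs'][0]['data']['concepts']:
    print(concept['name'], concept['value'])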
Here are the results on two images I picked off Google Image search to test the model:

Model predictions: Opti [0.8209581] Laser [0.17904194]
versus

Model predictions: Laser [0.7931673] Opti [0.20683272]
Now I am sure that my model needs more work and more training data to perform well across a wide range of images (I did test a bunch more and got generally good results).
And yes, I made sure these images were not in the training set. Clarifai makes that easy by providing a beautiful visual console that lets you check all the images you used to train your model. In the console you can do training just by selecting images (if you don’t want to write any code) and also conduct visual and concept search.
If you are as excited about this as I am, you should go play with it, although I do suspect their servers will be a bit slammed today as Matt is unveiling this at O’Reilly’s AI conference. You can learn more on the Clarifai blog or dive into the documentation.