One of my favorite Kaizen techniques is visualization. On the shop floor this takes the form of large signs that graphically display key quality metrics. The charts show overall trendlines but also break out individual teams. This is a powerful motivator. When there are large gains in quality, the credit can go to the team(s) that produced the progress. Conversely, when the overall chart shows a dip or a slowdown in improvement, it is often readily apparent which team is responsible. Some people may find this level of transparency uncomfortable, but much depends on how successes and failures are handled. In Kaizen, successes are celebrated by all teams and failures are seen as an opportunity for learning (more on that in a separate post). That means when a team stands out as having dragged down performance, the reaction of the other teams is not a “shame on you” but a “let us help you.”
Quality visualization in a development environment is surprisingly rare. I have seen very few teams where the first thing folks see when entering the development area (and when logging onto the intranet or wiki) are charts of quality metrics. That is all the more surprising as many of these can be collected automatically (unlike on a shop floor, where collection tends to involve a fair bit of manual effort). Site or service uptime and latency, the number of bugs at varying levels of severity, time to close out bugs, missing or broken check-ins, unit test results, etc. can all be gathered in an automated fashion. Breaking this out by team may take some effort, but most folks aren’t even displaying aggregates, so there is a lot of room for improvement.
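To make the automated-collection point concrete, here is a minimal sketch in Python that turns JUnit-style XML test reports (the format most CI systems already emit) into a per-team pass-rate display, the software equivalent of the sign on the shop floor wall. The reports/<team>/*.xml directory layout and the team names it implies are a hypothetical convention assumed for illustration, not a standard.

```python
#!/usr/bin/env python3
"""Aggregate JUnit-style test reports into a per-team quality snapshot.

Assumes CI drops XML reports into reports/<team>/*.xml -- the
directory-per-team layout is a hypothetical convention for this sketch.
"""
from collections import defaultdict
from pathlib import Path
import xml.etree.ElementTree as ET

REPORT_DIR = Path("reports")  # hypothetical location of CI output


def collect_metrics(report_dir: Path) -> dict:
    """Return {team: {"tests": n, "failures": n, "errors": n}}."""
    metrics = defaultdict(lambda: {"tests": 0, "failures": 0, "errors": 0})
    for xml_file in report_dir.glob("*/*.xml"):
        team = xml_file.parent.name
        root = ET.parse(xml_file).getroot()
        # iter() matches the root itself too, so this handles both a bare
        # <testsuite> root and a <testsuites> wrapper element.
        for suite in root.iter("testsuite"):
            metrics[team]["tests"] += int(suite.get("tests", 0))
            metrics[team]["failures"] += int(suite.get("failures", 0))
            metrics[team]["errors"] += int(suite.get("errors", 0))
    return metrics


def render_dashboard(metrics: dict) -> None:
    """Print a plain-text pass-rate bar per team."""
    for team, m in sorted(metrics.items()):
        passed = m["tests"] - m["failures"] - m["errors"]
        rate = passed / m["tests"] if m["tests"] else 0.0
        bar = "#" * round(rate * 40)
        print(f"{team:<12} {bar:<40} {rate:6.1%} ({passed}/{m['tests']} passing)")


if __name__ == "__main__":
    render_dashboard(collect_metrics(REPORT_DIR))
```

Feeding the same counts into a charting tool or a wiki page over time gives the trendlines described above; the per-team breakdown falls out of however your CI happens to organize its output.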