Software is Eating Software

Earlier this year I wrote a blog post about how writing software applications in-house is making a comeback. I was reminded of this when I attended the HackNY hackathon at NYU over the weekend. I was there to help Susan represent her startup Ziggeo, which provides an API for video recording and playback. The number of APIs and open source projects out there that do amazing things is simply staggering.

Watching some other NY-based startups present their APIs and feeling the excitement among the students, I felt inspired to write a little hack of my own. The basic idea was to let anyone record a video and then analyze its contents. Without access to big and powerful services this could be a project for many engineers to work on for many years! Here it wound up taking me a couple of hours.

I used Ziggeo for the recording and video hosting. I then had a bit of Python code to pull down a video from Ziggeo after recording and cut it up into frames using the open source ffmpeg. I then submitted the frames as images to Clarifai for image analysis and reported back the results. You can try it out here. It is fun if you hold up different objects in front of the camera. We have even recorded our dog. There is a lot more one could do to present the results more elegantly but I was pretty pleased for a few hours of work.

Why does all of this matter? Because right now there is a fallacy in Washington and elsewhere that there will be nearly unlimited future demand for programmers. There is an idea that we just need to retrain as many people as possible and this will solve some of our labor market woes. With software eating software, though, any extrapolation of current hiring needs is likely to vastly exceed the actual needs.

This is not to say you shouldn’t learn how to program. I have written before about the importance of programming as a way of understanding. But neither individuals nor policy makers should see this as a silver bullet for employment.

Posted: 20th October 2014
Tags:  software programming hackathons ziggeo employment

Tech Tuesday: Concurrency (Conclusion)

One of the things I always realize as I write Tech Tuesdays is just how much there is to potentially know on any given topic. Entire books have been written on concurrency alone and there is already a long list of research papers published in 2013. Sometimes this realization can be genuinely daunting — any one human can know so little of all available knowledge. But I digress. Today’s post will be an attempt to write a conclusion to this mini series on concurrency.

As I pointed out at the end of the previous post on locks and mutexes, sometimes the cure to concurrency problems is worse than the disease. One field in which this is particularly acute is realtime systems, such as the computer in your car. That computer is responsible for a lot of different things, from operating your navigation system to deploying your air bag. One approach would be to use concurrent programming techniques. But that could have some pretty problematic results. Imagine the program for the airbag not being able to execute because it has to wait for some other program to release a needed resource. That could have some dramatic consequences.

So car computers tend to use a different approach. They run only a single program which takes predefined turns carrying out activities. For instance, it may check the conditions for air bag deployment every 100 milliseconds.  This approach has a lot of waste involved as (hopefully) most of the time when this check is carried out nothing needs to be done. But it is entirely predictable. And in this situation there is a very high premium on predictability of program behavior!

One way to build such a system is to have a master loop running that calls subroutines one after another (eg a subroutine for the air bag, one for the navigation, and so on). What’s critical to making that approach work is that the subroutines need to return in predictable (and usually very short) time. Another area where this approach is used a lot is user interfaces. For instance, I am typing this in a web browser on Tumblr. As I type, move the mouse, click on things, etc, we could either be jumping back and forth between different programs or executing a loop that invokes little pieces of code one after another to “work off” the user events. In this case the master program is known as an “event loop.” In fact there have been whole operating systems based on this kind of “cooperative” approach.
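
The master-loop idea is easy to sketch in code. Here is a minimal Python illustration (the subroutine names and the bookkeeping list are invented for the example; a real car computer would obviously not be written in Python):

```python
calls = []  # records the order of subroutine calls, for illustration only

def check_airbag():
    # Would read sensors and deploy if needed; must return quickly.
    calls.append("airbag")

def update_navigation():
    # Would advance the navigation display by one small step.
    calls.append("nav")

def master_loop(cycles):
    # Each subroutine gets a turn on every cycle, in a fixed order.
    # Predictability comes from every subroutine returning in bounded time;
    # a real system would also sleep until the next 100ms tick each cycle.
    for _ in range(cycles):
        check_airbag()
        update_navigation()

master_loop(3)
```

Because the schedule is fixed, we know exactly how often each subroutine runs, which is the whole point.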

So why don’t we do everything that way? Because modern hardware has gone a different direction. Today’s CPUs all have multiple cores. A core is essentially a separate processing unit, which means several pieces of code really are running at the same time. And so we can’t have a single event loop but instead are back to having multiple programs executing concurrently. Because of this realization a lot of work is going into developing non-blocking algorithms and data structures. The basic idea is simple: let multiple programs work on the same things by allowing for the modification of data structures without blocking (hence, no deadlock, no resource starvation, etc). This turns out to be quite hard to do in practice though. One enabling technology that people have been working on is so-called Software Transactional Memory.
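
The workhorse primitive behind most non-blocking algorithms is compare-and-swap: update a value only if nobody else has changed it since we read it, and retry otherwise. Here is a rough sketch in Python, where the `compare_and_swap` helper merely stands in for what is really a single hardware instruction:

```python
class Cell:
    def __init__(self, value):
        self.value = value

def compare_and_swap(cell, expected, new):
    # Stand-in for the atomic hardware instruction: the write succeeds
    # only if the cell still holds the value we originally read.
    if cell.value == expected:
        cell.value = new
        return True
    return False

def nonblocking_increment(cell):
    # Retry loop: read, compute, attempt the swap. No locks are held,
    # so no other program can ever be blocked waiting on us.
    while True:
        old = cell.value
        if compare_and_swap(cell, old, old + 1):
            return

counter = Cell(10)
nonblocking_increment(counter)
```

If another program sneaks in between the read and the swap, the swap simply fails and we try again with the fresh value.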

I won’t go into more detail here as this post is already too long — proving my point from the introduction. So at least for the moment this will be the end of the rather long programming series on Tech Tuesday. We have now covered all nine questions that I originally set out when I compared programming to telling a person what to do. Next Tuesday I will probably run another survey to determine what to write about next, with one option being working through an example based on my interest in neural nets.

Posted: 12th February 2013
Tags:  tech tuesday programming concurrency

Tech Tuesday: Concurrency (Locks, Mutexes, Semaphores, Oh My)

Last Tech Tuesday, we learned about atomic actions as a way of dealing with the problems arising from concurrency. I ended that post by pointing to the limits of atomic actions — most notably that the operating system and/or database system cannot provide arbitrarily complex atomic actions as it cannot possibly anticipate all the needs of different programs. At the time I also raised the question as to how atomic actions can be provided in the first place!

Let’s start with the second question. One answer is that this is very easy to do if the hardware is able to guarantee that a set of instructions or potentially a single instruction can complete without any interference from other concurrently executing programs. As it turns out, very little is needed from the hardware in order to build more complex ways of managing concurrency in software. For instance, the ability to check and change the value of a memory location in one go would be enough. As it turns out with some fancy footwork such as Dekker’s algorithm it is also possible to achieve a similar result using software only without any explicit hardware support. 
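
To make that concrete, here is a sketch of how a simple spinlock could be built on top of a single test-and-set primitive. In real systems test-and-set is an atomic hardware instruction; the Python method below only simulates its effect for illustration:

```python
class SpinLock:
    def __init__(self):
        self.held = False

    def test_and_set(self):
        # Simulates the atomic primitive: read the old value and set the
        # flag to True in what the hardware guarantees is one indivisible step.
        old = self.held
        self.held = True
        return old

    def acquire(self):
        # Spin until test_and_set reports the lock was previously free.
        while self.test_and_set():
            pass

    def release(self):
        self.held = False

lock = SpinLock()
lock.acquire()              # succeeds immediately, nobody holds the lock
busy = lock.test_and_set()  # a second attempt now finds it held
lock.release()
```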

Once we have a primitive atomic action or mutual exclusion capability we can use that to build up more complicated ways of managing concurrency. For instance, a so-called counting semaphore can be used to let a pre-defined number of programs (but no more) access a resource concurrently. Or we can use it to acquire a write lock on a row in a database, which we can then update without interference from any concurrently executing programs. The idea behind all of these mechanisms is essentially the same: limit access to prevent conflict. Unlike with an atomic action, arbitrarily complex sequences of activity can be carried out before another program is given access.
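
Python, for example, ships a counting semaphore in its standard library as `threading.Semaphore`. A small sketch that lets at most two of five worker threads into a critical section at the same time:

```python
import threading

semaphore = threading.Semaphore(2)  # at most 2 concurrent holders
inside = 0                          # workers currently in the critical section
max_seen = 0                        # highest value of inside we ever observe
guard = threading.Lock()            # protects the two counters above

def worker():
    global inside, max_seen
    with semaphore:  # blocks while two workers are already inside
        with guard:
            inside += 1
            max_seen = max(max_seen, inside)
        with guard:
            inside -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

However the five threads happen to interleave, `max_seen` can never exceed the semaphore's count of two.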

So let’s go back to our ATM problem from before and see how we can now solve it. Here is some example pseudo code:

lock(account)
retrieve(account, balance)
if (balance > amount):
   balance = balance - amount
   update(account, balance)
unlock(account)

The call to lock() will block program execution until it has acquired the lock on the account. It guarantees that only one program can acquire the lock at any one time. There goes our chance of getting rich by lining up thousands of simultaneous withdrawals at different ATMs!

Does that mean all is fine? And if so, why would anybody want to just use atomic actions instead? As it turns out having more powerful capabilities for managing concurrency gives us opportunities to mess things up in other ways (as in “with great power comes great responsibility”). Here is just one quick example — the potential for two programs to deadlock. Consider the following naive implementation for transferring money between two accounts:

lock(account1)
retrieve(account1, balance1)
if (balance1 > amount):
   lock(account2)
   retrieve(account2, balance2)
   balance1 = balance1 - amount
   balance2 = balance2 + amount
   update(account1, balance1)
   update(account2, balance2)
   unlock(account2)
unlock(account1)

Imagine now that I try to send money to Susan at the same time as she is trying to send money to me. Two transfer programs start to execute concurrently and for one of them account1 is my account and for the other account1 is Susan’s account. So each program manages to acquire the first lock. Let’s assume each of us has enough money in the account to make the transfer so that we get inside the conditional. Well, now the program trying to execute my transfer wants a lock for account2 which is Susan’s account but that’s already held by the program trying to execute Susan’s transfer to me. And vice versa. Since locks block, neither program can continue to execute. That also means that neither of the initial locks will ever be released and the two programs will remain suspended forever. Any subsequent programs that need access to our accounts would also be stuck. In fact until the processes are forced to terminate both accounts would be inaccessible.
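
One standard way out of this particular trap, which this post does not cover, is to agree on a single global order in which locks are acquired, no matter which direction the money flows. A sketch in Python, with the account names and starting balances invented for the example:

```python
import threading

accounts = {
    "albert": {"balance": 100, "lock": threading.Lock()},
    "susan": {"balance": 100, "lock": threading.Lock()},
}

def transfer(src, dst, amount):
    # Always acquire locks in a fixed global order (alphabetical by name),
    # so two opposite transfers can never end up holding one lock each.
    first, second = sorted([src, dst])
    with accounts[first]["lock"]:
        with accounts[second]["lock"]:
            if accounts[src]["balance"] > amount:
                accounts[src]["balance"] -= amount
                accounts[dst]["balance"] += amount

# My transfer to Susan and hers to me, running concurrently.
t1 = threading.Thread(target=transfer, args=("albert", "susan", 10))
t2 = threading.Thread(target=transfer, args=("susan", "albert", 10))
t1.start(); t2.start()
t1.join(); t2.join()
```

With lock ordering both transfers complete and the balances end up back where they started, instead of both programs hanging forever.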

As so often in software, by solving one problem (the limited capabilities of atomic actions) we have introduced another (deadlock) that is potentially much more severe! Next Tuesday I will wrap up this little mini series about concurrency with a brief look at asynchronous programming with non-blocking algorithms.

Posted: 5th February 2013
Tags:  tech tuesday software programming concurrency

Tech Tuesday: Concurrency (Intro)

This will be almost the last Tech Tuesday post in the initial series on programming. The basic premise of this series was that programming is a lot like giving a person instructions for how to do something. As I had mentioned in the very first post of the series and then again in the recap, one challenge is that we may not be the only one providing instructions. And from everyday life we know how hard it can be to be on the receiving end of this.

Imagine for a moment working in a matrix organization where you report to two different managers (eg the regional head for North America but also the global head for your product). There are two canonical ways in which instructions from these two managers could cause problems for you. First, they could directly contradict each other, as in the global product lead wanting you to charge a high price for profitability and the regional manager wanting you to charge a low price for rapid growth. Second, the amount of work that the two pile onto you could add up to more than you have time or other resources for.

In all the discussion of programming so far we have (conveniently) assumed that our program was the only one executing on the machine. In reality that is hardly ever the case. Instead we have multiple programs running concurrently. I will take a somewhat loose view of what that term means exactly, because there are some subtle differences between two programs truly running at the same time (as in a machine with multiple CPUs or a multi-core CPU, or a set of machines that are coupled to each other) and programs executing one at a time but in an interleaved fashion. These differences won’t matter for understanding the basic issues.

With concurrency the machine faces exactly the same problems as the poor employee in the example above. If one program wants to set a value to high and the other to low which value should be chosen? And if two programs together need more than the available memory, compute or input/output resources which one should get them? This can lead to interesting problems even if all the code that is running is our own code. For instance, in writing code for a web site we often write it as if it were handling one visitor at a time. That of course works perfectly well when you have relatively few visitors to the site as modern machines are amazingly fast.

But what happens if your site is on the home page of Reddit? Suddenly lots and lots of visitors show up roughly at the same time. One way your web server may deal with that is by firing off multiple instances of your code. These might all request a database connection, require memory to run in, and so on. And pretty quickly more resources are being requested than the machine can make available. So this is an example of the second problem.

Now imagine that your site allows users to upvote or downvote a news story. When you have only the occasional visitor one at a time there are no problems. But as your traffic surges you may have many users nearly simultaneously clicking the upvote and downvote buttons. A naive implementation might look something like this:

retrieve votecount
if upvote then votecount = votecount + 1
else votecount = votecount - 1
store votecount

If this is the only program running and only one copy of it is executing at the same time there is no issue and this will work just fine. But as soon as multiple copies of this are running concurrently we have an example of potentially conflicting instructions. Let’s say the votecount is currently at 10. If two copies are executing at the same time, then the following could happen. Let’s assume that copy 1 is slightly ahead. It retrieves a votecount of 10. Then copy 2 also retrieves a votecount of 10. Copy 1 has received an upvote and increases the votecount to 11. Copy 2 received a downvote and decreases the votecount to 9. Copy 1 stores its votecount and immediately afterward so does Copy 2. The new votecount is now 9. But it should be 10. The upvote was lost in the process. This is an example of conflicting instructions being improperly resolved.
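
We can replay that exact interleaving deterministically in Python by carrying out each copy's steps by hand:

```python
votecount = 10

copy1 = votecount  # copy 1 (an upvote) retrieves the count: 10
copy2 = votecount  # before copy 1 stores, copy 2 (a downvote) also retrieves 10

copy1 = copy1 + 1  # copy 1 applies its upvote: 11
copy2 = copy2 - 1  # copy 2 applies its downvote: 9

votecount = copy1  # copy 1 stores 11
votecount = copy2  # copy 2 overwrites it with 9; the upvote is lost
```

The final count is 9 when it should be 10, exactly the lost update described above.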

Given the discussion of debugging and testing it should come as no surprise that concurrency tends to be the source of the most vexing bugs. When the code runs by itself everything is just fine. But certain constellations of simultaneous execution cause the bugs to surface! Next Tuesday I will talk a bit about how these problems arising from concurrency can be dealt with.

Posted: 15th January 2013
Tags:  tech tuesday programming concurrency

Tech Tuesday: Testing

Welcome back to Tech Tuesday. We are still in the process of wrapping up the initial series on programming. In the interim review I had identified a couple of high level questions that had not yet been answered dealing with code that has been written. Just before the end of the year we looked at bugs and debugging and I mentioned that there are two types of testing: unit testing and black box testing. More on these today.

One key purpose of testing is to identify bugs before they are exposed to end users. Software systems are often extremely complex and even seemingly small changes can have large impacts (future posts will deal with techniques for reducing complexity and dependencies). As a result we cannot rely on the idea that something was working previously and so should still be working now following a “small change.” Instead after any change we need to test again to check for new bugs and also for regressions — features that stop working or previously “fixed” bugs that are now reappearing.

For a long time most of the testing that was done was black box testing. That means testing the entire system from an end-user perspective and treating the code itself as a hermetic black box (hence the name). Often this black box testing was done by separate Quality Assurance (QA) departments and carried out manually. I once worked on a large DoD project that involved hundreds of pages of manual test procedures that would be carried out by a separate contractor and literally reported on paper. There were many problems with this approach including infrequent testing (in combination with a waterfall model of development), human testing errors and most importantly an adversarial relationship between the QA and development teams. Over the years tool makers jumped into this as an opportunity and today if you are not running mostly automated black box testing you are doing it wrong.

Unit testing emerged much more recently and refers to writing automated tests for small units of code. They are made part of an automated tool chain so that they can be invoked whenever a change has occurred. It is somewhat strange that unit testing took so long to emerge, because in manufacturing automated checks of the quality of intermediate steps have been a key component of achieving overall quality for a long time. Part of the reason was the separation of QA and development in the waterfall model, but it was also difficult to implement this with bare-bones lower level languages such as C.

In some approaches to programming the unit tests are written before the code for the unit. It is similarly possible to create black box test scripts based on a mockup of the user interface. There are many advantages to thinking about testing up front this way. Among other things it forces smaller units of code and more precise use cases. The tests can also serve as documentation. By making both product (black box) and development (unit) responsible for testing it removes the organizational friction of a separate QA department and dramatically speeds up the feedback cycle.
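
To make this concrete, here is what a tiny unit test looks like in Python's built-in unittest framework (the `upvote` function is invented for the example):

```python
import unittest

def upvote(votecount):
    # The unit under test: a single small, easily testable function.
    return votecount + 1

class TestUpvote(unittest.TestCase):
    def test_increments_count(self):
        self.assertEqual(upvote(10), 11)

    def test_works_from_zero(self):
        self.assertEqual(upvote(0), 1)

# Run the tests; an automated tool chain would do this on every change.
suite = unittest.TestLoader().loadTestsFromTestCase(TestUpvote)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Writing the two test cases before writing `upvote` itself is exactly the test-first approach described above.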

Bringing this all back to the original analogy of programming being similar to telling a human what to do, testing is thinking up front about how you will know if the person is doing what you asked them to do. That can be done by examining the output of their work (black box testing) and by looking at the intermediate steps they are taking (unit testing). Both clearly play a role in every day life!

With that we have almost reached the end of this initial series on programming. I am planning to write one more post next Tuesday on an important topic which is the source of much of the difficulty in debugging and testing: concurrency — the computer trying to do many things at the same time. This is similar to one person receiving instructions for what to do from multiple bosses which we all know to be difficult!

Posted: 8th January 2013
Tags:  tech tuesday programming testing

Tech Tuesday: Bugs and Debugging

Today will be the last Tech Tuesday for 2012 as I will be taking another one of my information slim down breaks starting next week. In the review of programming topics covered so far, I pointed out that we have not yet looked at any of the questions that arise once we have given instructions to the computer. The biggest one here is does it all work? Or are there errors? How would we know and what do we do about it?

Program errors are known as bugs, and finding and getting rid of them as debugging. It is sometimes claimed that this usage goes back to a story of Grace Hopper finding a moth in the relays of Harvard’s Mark II computer. That story may or may not be apocryphal, and the use of the word bug to describe a fault or error is definitely older than that. In reading up a bit for this post I discovered that Lady Ada is credited, including by Babbage himself, with having found the first error in a program for his analytical engine (Google had a Lady Ada Doodle yesterday in honor of her 197th birthday).

I first wrote about errors in the context of syntax. Syntax errors tend to be the easiest to find as generally the program will either not compile or it will not execute. The compiler or interpreter will produce an error message that identifies the offending lines of code and even what the specific syntax error is. Even then sometimes some hunting around is required, for instance in the case of unbalanced blocks (meaning a missing closing element for a block such as a missing “}”).

Most of the time though the bugs we are dealing with are of a semantic nature. We think the program means one thing but when the computer executes it, it really does something else. A lot of work has gone into trying to avoid this by formally verifying the correctness of software. In areas such as say the software for space rockets it is easy to see why we would want to be able to do this before launching the rocket. There are, however, some obvious and some less obvious problems with being able to do this. For starters, how do you specify what the program is supposed to be doing other than by — drum roll — writing another program?

Because formal methods are laborious and limited a lot of code does contain bugs. Some are even sufficiently famous to have their own name, such as the off-by-one error or OBOE (if you make seven cuts, how many slices of sausage do you have?). The question then is how and when do you best find bugs and get rid of them? The answer to this has changed over the years in important ways as we have moved from a waterfall model of development to the agile model.
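
The sausage riddle is the classic fencepost version of the off-by-one error: seven cuts produce eight slices, and code that confuses the number of cuts with the number of pieces is wrong by exactly one. A minimal Python illustration:

```python
def slices_buggy(cuts):
    # Off by one: counts the cuts themselves instead of the pieces.
    return cuts

def slices(cuts):
    # n straight cuts across a sausage yield n + 1 slices.
    return cuts + 1
```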

As I pointed out yesterday, agile development is really the application of ideas from continuous improvement to software. One of the key ideas is to have no or minimal interim inventory as it will hide production flaws. In agile we should have as little untested code — which could hide bugs — as possible. This is accomplished through techniques such as frequent code reviews and lots of unit testing and automated black box testing (a subject of some future posts). Having fewer bugs and finding the ones that do exist more quickly helps reduce compounding effects and makes debugging a lot easier.

Debugging is the process of tracking down and eliminating bugs, which are often tracked in a system such as Bugzilla or Jira. Some engineers absolutely hate it and others love it since it can be a bit like playing detective (especially when debugging other people’s code). Even just reproducing a bug can sometimes be quite challenging. For instance, the bug may occur only in a very specific configuration of the program or as a result of interaction with either underlying hardware or other software. In fact, looking for the bug may change the conditions enough for it not to occur, leading to a so-called Heisenbug.

Personally I am in the camp of folks who find tracking down and fixing bugs quite satisfying. My first major debugging experience was finding bugs in an accounting system that was written in an early version of Basic that allowed only two character variable names and the system consisted of 80+ separate program files. What was your most epic bug / debugging experience?

Posted: 11th December 2012
Tags:  tech tuesday programming bugs debugging

Tech Tuesday: Programming (Interim Review)

I started the Tech Tuesday series on programming in April of this year with an overview post that compared programming to telling a person how to do something.  At the time, I presented nine different questions that this raises, such as which language to speak and which words to choose in that language.  Some of the early posts that followed dealt with the three foundational questions, such as the choice of programming languages, their syntax and semantics.

The next set of posts addressed the fourth question as to how code can refer to “things.”  The fundamental concept there was variables, which allow code to deal with changing values of different data types.  Following that was the question as to how to break instructions down into smaller steps.  Here we learned about reserved words which are the fundamental vocabulary of a programming language.  We then saw how to use control structures to deal with conditional execution and loops, which speaks to the question of how to avoid repeating oneself.

In fact, all the posts since then have dealt in varying ways with those same three questions of referring to things, building up from small steps and avoiding repetition. These are really the central questions of programming.  We saw how functions provide means for building up more powerful words from the basic building blocks that can then be used throughout a program. Then we looked at data structures as ways of better describing the attributes of a thing or object and also of grouping things. Finally we considered algorithms as the way for thinking about how to use small steps to accomplish a task and implementing these in different languages, including one specific to the task.

We have not yet looked at the last three sets of questions which all deal with issues that arise once we have given some instructions (to a person or machine), such as how do we know if our instructions make sense?  Is what we want to happen actually taking place?  What about contention with other instructions from someone else? The next set of posts will explore these issues, starting with looking at bugs, where they come from and how to find them.

After that I may attempt to walk through a complete example from beginning to end putting all these concepts to work.  As I think about that I am open to suggestions for what the specific problem might be.  So if you have ideas / requests, please let me know in the comments!

Posted: 27th November 2012
Tags:  tech tuesday programming review

Tech Tuesday: Algorithms (Wrap Up)

Today’s Tech Tuesday post will wrap up the algorithms mini-series.  Last Tuesday I presented implementations of the same algorithm in five different languages.  I had mentioned in an earlier post that the particular problem we are solving — seeing if a sequence of numbers contains a number that is repeated at least four times in a row — belongs to a class of problems known as pattern detection or pattern matching.  Now there are much more general algorithms for this class of problems and instead of rolling our own, we may be able to use such a more general algorithm.  Here is an example of doing just that for our problem using something known as regular expressions in Javascript:

// numbers = "1 15 7 3 8 8 19 5 12 8 2 6 8";
numbers = "1 15 8 8 8 8 7 3 19 5 12 2 6";
if (numbers.match(/(\d+)\s\1\s\1\s\1/)) console.log("Found four times repeated number");
else console.log("No four times repeated number found");

That’s it!  Whoa, where did all the code go compared to the previous implementations?  Seems like the longest parts here are setting up the numbers and then displaying the result.  All the work is done by just one tiny piece of code:

numbers.match(/(\d+)\s\1\s\1\s\1/)

How is that possible?  Here we are setting up our numbers as a string and then using a so-called regular expression — the /(\d+)\s\1\s\1\s\1/  — to match a pattern in the string of numbers.

I won’t explain here in detail how regular expressions work (another post) but think of them as a language for describing patterns.  Saying \d in this language means, “match a digit”.  Putting a + behind it means at least one but possibly more of what comes before (so here at least one but possibly more digits).  Putting parentheses around the \d+ means remember that pattern of digits (which is saying, remember this number).  The \s matches any whitespace character and a blank here separates two numbers. Finally the \1 is what’s known as a back-reference.  It says, match what you previously remembered. So the whole expression says: match a number and then see if that number occurs three more times immediately afterwards.
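
The same pattern carries over almost unchanged to other languages; here it is in Python using the standard re module:

```python
import re

numbers = "1 15 8 8 8 8 7 3 19 5 12 2 6"

# (\d+) captures a number, \s matches the separating whitespace, and each
# \1 back-reference requires the captured number to appear once more.
match = re.search(r"(\d+)\s\1\s\1\s\1", numbers)
if match:
    message = "Found four times repeated number"
else:
    message = "No four times repeated number found"
```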

This is very efficient from a programming perspective as we had to write a minimal amount of code.  Someone else implemented the matching algorithm for us and we are simply describing what we want matched.  Since regular expressions are widely used their implementations tend to be both efficient and bug free.  Those are good reasons for using a more general algorithm that someone else has implemented either as part of a language, or as a library, or these days as a web service.

That doesn’t mean we couldn’t potentially do better with our custom algorithm in certain cases.  For instance, the trick above would be inefficient to use in a situation where the numbers are arriving over time as opposed to being known upfront.  We would have to re-run the regular expression over the entire sequence of numbers every time, which would get quite slow if the sequence becomes long.  Also, if the sequence of numbers consists of relatively long integers, the generalized engine of regular expressions (which operate on text) may be slower than the comparison of integers used in our custom approach.
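
For the streaming case, our custom approach can be packaged up so that each arriving number costs only a single comparison, with no re-scanning of the history. A sketch in Python:

```python
class RepeatDetector:
    # Watches numbers arrive one at a time and reports as soon as some
    # number has appeared the required number of times in a row.
    def __init__(self, required=4):
        self.required = required
        self.prev = None
        self.counter = 0

    def feed(self, number):
        if number == self.prev:
            self.counter += 1
        else:
            self.prev = number
            self.counter = 1
        return self.counter >= self.required

detector = RepeatDetector()
results = [detector.feed(n) for n in [1, 15, 8, 8, 8, 8, 7]]
```

The detector fires on the fourth 8 and never needs access to the earlier numbers, which is exactly what the regular expression approach cannot do incrementally.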

Next week I will provide a brief recap that relates everything we have now learned about variables, data types, reserved words, control structures, functions, data structures and algorithms back to the questions raised in my original post on programming.  We will also come back to the idea that programming is about building up ever more powerful constructs which make it easy to solve problems.  In particular we will see that regular expressions are an example of something known as a domain specific language (here the domain is: patterns and pattern matching).

Bonus: there is actually a bug with this regular expression that will arise in certain cases.  Can you see what it is and suggest how to fix the problem?

Posted: 20th November 2012
Tags:  tech tuesday programming algorithms

Learning to Program, Programming to Learn

One of the reasons that we decided to move to Chelsea was the proximity to the USV office.  The other was that our kids can walk to Avenues, a new private school with an ambitious program of building campuses in leading cities around the world.  Another distinguishing factor of Avenues has been their embrace of technology which got a good writeup in today’s Wall Street Journal.  Our kids each have both an iPad and a MacBook Air from school and use both of them heavily across a variety of classes.

There is one important missing component though so far and that is learning to program. That of course has been the subject of my Tech Tuesdays and I have in the past promoted Scratch as a way for kids to learn programming.  In that post I wrote that “the use of Scratch can and should be pervasive throughout instruction rather than being something taught separately.”  Here is just a short set of ideas for how to do that in different classes:

  • History - changing maps over time; graphical relationships between concepts and people; animated historical timelines
  • English - create word games; animated six word biography; create your own scene from a drama (“enter stage left”)
  • Music - create electronic compositions; visualize sound and music
  • Science - simulate experiments; graph the results from experiments; safely explode things
  • Math - illustrate the number line; create math games; show the relation between algebra and geometry

Having spent more time writing about programming and also talking to my kids about it, I have become even more convinced of the importance of integrating it into other classes.  The reason is that programming provides an exceptional way of learning a concept.  It is reminiscent of the saying that you haven’t really learned anything unless you have taught it several times.  Programming is “teaching” the computer how to do something.  If you can’t teach it to the computer you have probably not completely understood it.  Hence the “programming to learn” in the subject line of this post.

For instance, our kids are currently learning about identities and inverses in Math.  If they were simultaneously learning how to write a program that can find the inverse of a number for either addition or multiplication, I am convinced it would give them a much deeper understanding.
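To make that concrete, here is a minimal Python sketch of such an exercise (the function names are mine and purely illustrative, not from any curriculum): the additive inverse of x is whatever brings x back to the additive identity 0, and the multiplicative inverse brings x back to the multiplicative identity 1.

```python
def additive_inverse(x):
    # x + additive_inverse(x) should equal 0, the additive identity
    return 0 - x

def multiplicative_inverse(x):
    # x * multiplicative_inverse(x) should equal 1, the multiplicative identity
    if x == 0:
        raise ValueError("0 has no multiplicative inverse")
    return 1.0 / x

print(additive_inverse(5))        # -5
print(multiplicative_inverse(4))  # 0.25
```

Writing and testing these two tiny functions forces a student to articulate exactly what an identity and an inverse are, which is the point.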

Thankfully the team running Avenues is receptive to this idea.  I will be spending time at the school in early December to meet with several of the instructors to talk about Scratch and other ways to integrate programming into the learning experience.  It is something I very much look forward to. 

Posted: 19th November 2012Comments
Tags:  programming learning education Avenues

Tech Tuesday: Algorithms (Implementations)

Last week in Tech Tuesday, we continued our introduction to algorithms by looking at a first implementation.  As a reminder we are trying to find out whether a sequence of integers contains a number that is repeated four times in a row. Here is that Python code together with a bug fix for dealing with empty number sequences:

# numbers = [1, 15, 7, 3, 8, 8, 19, 5, 12, 8, 2, 6, 8]
numbers = [1, 15, 8, 8, 8, 8, 7, 3, 19, 5, 12, 2, 6]

counter = 1
if len(numbers) > 0:
    prev = numbers.pop()

while len(numbers) > 0 and counter < 4:
    next = numbers.pop()
    if prev == next:
        counter = counter + 1
    else:
        counter = 1
        prev = next

if counter == 4:
    print "Found four times repeated number"
else:
    print "No four times repeated number found"

I will now show implementations in a number of different languages.  These have been written specifically to stay as close as possible to the original English language description of the algorithm.  Or put more starkly: none of these is the best possible code in its respective language (including the Python example).  There are ways to solve the problem that are more idiomatic in each language.  But for now the main goal is to show that understanding which algorithm solves a problem is more important than any particular language.
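As a taste of what a more idiomatic version can look like, here is a sketch (my own, not code from the original post) of the same run-counting idea as a compact Python function:

```python
def has_four_in_a_row(numbers):
    # Track the length of the current run of equal values.
    run = 0
    prev = None
    for n in numbers:
        run = run + 1 if n == prev else 1
        prev = n
        if run == 4:
            return True
    return False

print(has_four_in_a_row([1, 15, 8, 8, 8, 8, 7]))   # True
print(has_four_in_a_row([1, 15, 7, 3, 8, 8, 19]))  # False
```

Same algorithm, but wrapped in a reusable function and without mutating the input list.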

From Python I decided to go to Javascript next.  Obviously that required some syntactic changes, such as adding var to define variables, semicolons at the end of lines, and curly braces to enclose blocks of code (the first two of these are somewhat optional in Javascript).

// var numbers = [1, 15, 7, 3, 8, 8, 19, 5, 12, 8, 2, 6, 8];
var numbers = [1, 15, 8, 8, 8, 8, 7, 3, 19, 5, 12, 2, 6];

var counter = 1;
var index = 0;
var prev, next;
if (numbers.length > 0) {
    prev = numbers[index++];
}

while (index < numbers.length && counter < 4) {
    next = numbers[index++];
    if (prev == next) {
        counter = counter + 1;
    } else {
        counter = 1;
        prev = next;
    }
}

if (counter == 4) {
    print("Found four times repeated number");
} else {
    print("No four times repeated number found");
}

You can see this code running here.  In addition to the syntactic changes, because Javascript doesn’t have a list data type, I chose to use an array, which required introducing an additional index variable to move through that array (note: Python of course supports index-based access as well, so we could go back and make the Python code work that way too).
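For what it’s worth, here is a sketch (mine, not from the original post) of what that index-based variant of the Python code might look like, mirroring the Javascript array version instead of using pop():

```python
numbers = [1, 15, 8, 8, 8, 8, 7, 3, 19, 5, 12, 2, 6]

counter = 1
index = 0
if len(numbers) > 0:
    prev = numbers[index]
    index = index + 1

# Walk the list by index instead of consuming it with pop()
while index < len(numbers) and counter < 4:
    nxt = numbers[index]
    index = index + 1
    if prev == nxt:
        counter = counter + 1
    else:
        counter = 1
        prev = nxt

if counter == 4:
    print("Found four times repeated number")
else:
    print("No four times repeated number found")
```

One nice side effect of the index-based version is that the list is left intact afterwards.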

From Javascript, the next language I decided to use was PHP.  That turns out to be an easy change: just switch the var declarations to a $ preceding the variable name and use array() to initialize the number sequence (run it)

# $numbers = array(1, 15, 7, 3, 8, 8, 19, 5, 12, 8, 2, 6, 8);
$numbers = array(1, 15, 8, 8, 8, 8, 7, 3, 19, 5, 12, 2, 6);

$counter = 1;
$index = 0;
if (count($numbers) > 0) {
    $prev = $numbers[$index++];
}

while ($index < count($numbers) && $counter < 4) {
    $next = $numbers[$index++];
    if ($prev == $next) {
        $counter = $counter + 1;
    } else {
        $counter = 1;
        $prev = $next;
    }
}

if ($counter == 4) {
    print("Found four times repeated number");
} else {
    print("No four times repeated number found");
}

Next up I decided to provide an implementation in C.  I went back and based that on the Javascript code because most of what I had to do was replace the var declarations with int (see the discussion of data types).  Here is the C code (run it)

/* int numbers[13] = {1, 15, 7, 3, 8, 8, 19, 5, 12, 8, 2, 6, 8}; */
int numbers[13] = {1, 15, 8, 8, 8, 8, 7, 3, 19, 5, 12, 2, 6};
int counter = 1;
int index = 0;
int prev, next;
int length = sizeof(numbers)/sizeof(int);

if (length > 0) {
    prev = numbers[index++];
}

while (index < length && counter < 4) {
    next = numbers[index++];
    if (prev == next) counter++;
    else {
        counter = 1;
        prev = next;
    }
}

if (counter == 4)
    printf("Found four times repeated number");
else
    printf("No four times repeated number found");

The only other salient change is that in C we have to explicitly state up front how many numbers we will be looking at when we initialize the array.  I then use sizeof to find the length of that array so that if the array were to come from some other initialization process the remaining code would do just fine.

So far the syntax and reserved words of the languages used are amazingly similar and our code looks and feels much the same.  Again that’s because I purposefully avoided idioms from the respective languages.  Now just for the fun of it here is an implementation of this algorithm in Lisp.  This is written in completely non-idiomatic Lisp and without obvious shortcuts, in order to follow the English language outline closely.  I want to bring home the point that an understanding of how computation works is far more important than learning any one particular language.  So here is the Lisp code (run it)

; (setq numbers '(1 15 7 3 8 8 19 5 12 8 2 6 8))
(setq numbers '(1 15 8 8 8 8 7 3 19 5 12 2 6))

(setq counter 1)
(if (> (list-length numbers) 0) 
    (progn (setq prev (car numbers)) 
           (setq numbers (cdr numbers))))

(loop while (and (> (list-length numbers) 0) (< counter 4))
      do (setq next (car numbers))
         (setq numbers (cdr numbers))
         (if (eql prev next)
             (setq counter (+ counter 1))
             (progn (setq counter 1) (setq prev next))))

(if (eql counter 4)
    (print "Found four times repeated number")
    (print "No four times repeated number found"))

It will take a second to get past the syntax of parentheses and the somewhat odd keywords (car and cdr? see if you can figure out from the code what they do).  But then the code is actually quite easy to follow because it closely mimics what all the other code examples do.
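In case you want to check your guess afterwards, car and cdr are simply Lisp’s names for taking the first element of a list and everything after it. In Python terms (a sketch of mine, not from the post):

```python
def car(xs):
    # First element of the list, like Lisp's car
    return xs[0]

def cdr(xs):
    # Everything after the first element, like Lisp's cdr
    return xs[1:]

print(car([1, 15, 8]))  # 1
print(cdr([1, 15, 8]))  # [15, 8]
```

That is exactly how the Lisp version walks the number sequence: take the front element with car, then continue with the remainder from cdr.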

In upcoming posts we will learn more about different ways to approach this same problem.  By that I mean both other algorithms and more idiomatic implementations.  The key takeaway for now, though, should be that for a company to say it is looking for a “python programmer” is a lot like saying it is looking for a “Black & Decker furniture maker.”


Posted: 13th November 2012Comments
Tags:  tech tuesday programming algorithms implementation
