CPU and Memory are the New Crude Oil

A commodity is defined as a good that is “supplied without qualitative differentiation.” You can’t charge more than others for crude oil; to charge more you have to at least turn it into gasoline. And if you really want to charge a lot more you have to turn it into a plastic product.

CPU cycles and memory are the crude oil of our times. If you want to charge more for them you have to add value. If you are a SaaS company that provides a complete solution you are adding a lot of value (plastic product). Conversely, if you are a developer platform you are just a smidgen above the commodity (gasoline).

So when thinking about margin, a SaaS business might have 80-plus percent gross margin. But cloud platforms aimed at developers will wind up being in the low double digits or possibly even in the single digits using total revenues as the denominator. Put differently, the financial metrics for the latter over time will look more like a refinery or a retailer.

From a long term investment perspective though it is important to keep in mind that ultimately what matters is total cash flow. If you are Walmart you make very little margin on any one purchase but you process a lot of them. So developer cloud platforms have to be scale businesses. It’s not a surprise then that another retailer, Amazon, is the leading cloud player and recently announced their 42nd price cut.

You can have a great business as a SaaS company or as a cloud platform, just don’t mistake one for the other, as how you think about financial metrics will be quite different.

Posted: 15th July 2014
Tags:  saas cloud commodity

SeeChange: Video Will Be Everywhere (What Do We Want?)

I recently finished Dave Eggers’s “The Circle” which provided a good challenge to my baseline view that more transparency is good and that data protection is a futile effort. One of the systems in the novel is an easy-to-deploy camera that anyone can point at anything to provide a livestream. Called “SeeChange,” it is promoted as delivering reduced crime, increased safety and just more information all around. The detractors are either smeared (with planted information) or hunted down by aggressive mobs.

SeeChange doesn’t exist in precisely that form, but cameras are rapidly becoming cheaper and eventually a bundle of camera plus networking plus battery / solar recharge will drop below $50 and millions of always-on cameras will be watching us. This is not a question of if but when. And there are startups, such as Placemeter, which lets individuals contribute and be paid for video feeds to help with city data (including car and foot traffic flows).

I was thinking about this a lot over the weekend as Susan and I spent more than 12 hours on the road visiting two of our children at different summer camps. We saw a lot of police cars that had pulled over speeders, and we too were speeding most of the time. It struck me that the existing system of enforcement is both arbitrary and inefficient. Conversely, a system of cameras would let us raise the overall speed limit and more importantly make it adaptive to the conditions (eg visibility, rain). Tickets could then be issued automatically on the basis of reading license plates. Similar systems are already in use in Germany.

Here then are two possible extremes: on one end, we could try to outlaw such cameras or outlaw their broad-based deployment. This seems like a complete losing proposition to me. Enforcement would be impossible (short of a dictatorship). We would wind up with police surveillance owned and controlled by the existing power structure. And if you doubt the potency of video to curtail official violence, read the grim account of officers deliberately taking inmates to areas on Rikers without video cameras before brutally beating them.

The other extreme would be to actively promote more cameras and make all the streams public at all times. In The Circle this extreme also includes cameras (not unlike Google Glass) worn around the neck that broadcast what the wearer is doing — including their meetings and conversations, which politicians are using to “go transparent.” A centralized system would of course present a huge opportunity for manipulation, so this extreme would require lots of cameras streaming directly or through many different systems.

There may be viable in-between positions which impose some limits (eg requiring disclosure of cameras inside of restaurants or other semi-public locations). This is a real challenge and one we should debate publicly with some urgency. Literature can and should contribute to that debate. Much as I am a fan of work like Eggers’s and Doctorow’s, it is almost too easy to write a dystopia these days. The real challenge, it seems to me, is to write a new utopia.

Posted: 14th July 2014
Tags:  video surveillance dystopia utopia

More On Basic Income (and Robots)

Marc Andreessen recently wrote a post titled “This is Probably a Good Time to Say That I Don’t Believe Robots Will Eat All the Jobs …” — like all of Marc’s posts it is full of good ideas and worth reading. I agree with many of the points including the benefits of technology driven deflation and being long on human creativity to find interesting things for us to do. There is, however, a critical distribution question that Marc mostly avoids but which is at the crux of the transition.

Marc writes:

Imagine 6 billion or 10 billion people doing nothing but arts and sciences, culture and exploring and learning. What a world that would be. The problem seems unlikely to be that we’ll get there too fast. The problem seems likely to be that we’ll get there too slow.

We could add other great activities to this list such as caring for each other, our communities and the environment. I too find that a desirable state of the world and I am optimistic that we can get there in the long run.

The transition is difficult though because pretty much all of these activities are either not paid at all (eg making music, cleaning up the environment) or paid poorly (eg teaching, nursing, open source, basic research). Now we can use crowdfunding mechanisms such as Kickstarter, Patreon, Beacon, Experiment, etc to pay some people for some of these and over time that can and will grow significantly. Still the money has to come from someone. And that’s a meaningful limitation at a time of great and growing inequality with nearly half of Americans without any savings (or net in debt).

A higher minimum wage, as vigorously argued for in an interesting recent piece on Politico, can inject some short term liquidity into the economy and I am sympathetic to that but it is also a very blunt instrument and still doesn’t help with the many unpriced activities. The same goes for government mandated shorter working hours or longer vacations (although I am pretty sure that Google’s founders did not have a government mandate in mind).

Marc suggests that we “[c]reate and sustain a vigorous social safety net so that people are not stranded and unable to provide for their families.” Our present approach to that though has gotten us stuck with a large government sector and complicated entitlement programs. 

This brings me once again to the idea of a guaranteed basic income. This is a potentially attractive alternative for a number of reasons:

First, it sets human creativity free to work on whatever comes to mind. For many people that could be making music or learning something new or doing research.

Second, it does not suppress the market mechanism. Innovative new products and services can continue to emerge. Much of that can be artisanal products or high touch services (not just new technology). 

Third, it will allow crowdfunding to expand massively in scale and simultaneously permit much smaller federal, state and local government (they still have a role — I am not a libertarian and believe that market failures are real and some regulation and enforcement are needed, eg sewage, police).

Fourth, it will force us to more rapidly automate dangerous and unpleasant jobs. Many of these are currently held by people who would much rather engage in one of the activities from above.

Fifth, in a world of technological deflation, a basic income could be deflationary instead of inflationary. How? Because it could increase the amount of time that is volunteered.

I will write more about how such a system could be financed. In the meantime suffice it to say that one of the (relatively few) roles of government should be the collection of taxes from companies and individuals (like myself) who have already benefited from technological change.

PS One way to think about a basic income is as follows: it removes a currently binding constraint on time optimization for many individuals allowing them to escape a local minimum — that in turn lets the economy as a whole adjust much faster (and with far less pain). 

Posted: 7th July 2014
Tags:  basic income work labor robots

A Basic Income Experiment I Would Like to See (Detroit)

I have referred to a basic income many times here on Continuations so now it is time to flesh out a bit more how this could ever possibly work. Many people do simple back of the envelope math: there are 319 million people in the US, so if you paid each of them $10K per year that would be $3.2 trillion, which exceeds the annual federal tax base (about $2.7 trillion). They then quickly conclude that the whole thing is a lost cause.

My basic contention though is that the amounts for a basic income could be significantly less and still achieve the goals of letting local activity flourish. So with that in mind here is an experiment I would like to see: have the city of Detroit recruit up to a thousand people to one of their destitute areas with a basic income set at something like $400 per month. So the monthly cost of the experiment including some initial overhead might be $500K or $6 million for a full year. I believe the funds for that could be raised from people like myself who are interested in seeing such an experiment.
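The arithmetic behind these figures is easy to check. Here it is in a few lines of Python (the roughly $100K per month of overhead is my inference from the gap between the stipends and the $500K total):

```python
# Back-of-the-envelope cost of the proposed Detroit experiment
participants = 1000
stipend = 400                                # dollars per person per month

monthly_stipends = participants * stipend    # $400,000/month in basic income
monthly_total = 500_000                      # the post's figure including overhead
annual_total = monthly_total * 12            # $6,000,000 for a full year

print(monthly_stipends, annual_total)
```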

Now imagine three or four people sharing a house. They could easily afford the utility bill (unlike the current situation in Detroit where almost half of households cannot even pay their water bill). As part of the experiment the city should also work to provide high speed internet at only slightly above cost in a utility model. By picking a relatively compact area this could be done wirelessly as a start to reduce the initial set up cost and time. Eventually if the experiment works the network can be expanded. There are plenty of houses in Detroit that are being either razed entirely or auctioned off in the low thousands of dollars, so housing should be the least of the issues. Especially because people with a basic income would be excellent credit risks on a P2P lending platform such as Lending Club.

Here are two other components of the experiment that I think would be critical. First, there should be relatively little regulation on activity, for instance to make it possible to do local farming, operate small schools, drive others around and so on: exactly the kind of activities that historically allowed people in communities to help each other. Second, I believe that participants for this experiment should be recruited and screened. I am not sure exactly what the right criteria would be but ideally they generate some diversity in interests and backgrounds (eg include people who already know how to renovate houses). One could think of this as a colony, not in a new geographic area but in a new social arrangement. Therefore initial recruitment is essential to increase the likelihood of success.

Would love to hear from anyone who thinks this kind of experiment would be interesting. Please also provide any and all feedback on the conditions for such an experiment that you think make sense (or don’t).

Posted: 24th June 2014
Tags:  basic income experiment Detroit

Debating Disruption: Mind the Non-linear

Following the publication of Jill Lepore’s “The Disruption Machine” and Clayton Christensen’s vigorous response in an interview there has been a healthy debate around the merits and even existence of disruption in many posts and tweets. I had been busy preparing for and then teaching a computer bootcamp for our children and some friends so I had mostly ignored this debate. It is of course highly relevant to my claim that we are at the beginning of a massive transition from industrial to information society.

I was particularly intrigued by Nassim Taleb’s tweeted claim that

"Disruption" has to be BS as it is mathematically incompatible with the Lindy Effect. The 2 cannot coexist.

The Lindy Effect states that for many types of objects but especially for technology and ideas the best predictor of their expected future lifespan is how long they have already been around. I believe he pinpoints the crux of the debate, although he is wrong on a critical detail which is the power of non-linear change (something he certainly appreciates as it permeates his books including the fantastic Antifragile).

The reason a lot of technology persists is that in many areas change has only been incremental. We continue to use glasses, tables, silverware, etc. because these are technologies for which change has been linear/incremental (at best). But we drive cars to work instead of riding horses because cars represent a non-linear change over horses. In fact, horses turn out to have been eclipsed by non-linear technology changes in agriculture, warfare and transportation, which is why we have gone from having 26 million working horses in the US in 1916 to a few thousand now.

Nowhere have we seen more exponential (ie highly non-linear) change than in information technology. We no longer use an abacus, or a slide rule, or a mechanical tabulator, or a room-filling mainframe, or a million dollar workstation because the phone in our pockets has more compute power than all of these! These technologies were disrupted by the exponentially better ones that came after them. And the Internet has been an exponential change for the size and scale of networks, which is why there are now networks in which hundreds of millions and even billions of people participate. That too has disrupted and will continue to disrupt existing smaller networks and hierarchies.

Yet even in the world of information technology and networks the word disruption is used all too frequently and for things that aren’t in fact disruptive. Craigslist represented a non-linear change for the classified ad business. It took ads that cost tens or even hundreds of dollars and reached thousands of people and replaced them with ads that were free and reached millions. That’s a highly non-linear change and it resulted in a disruption of the newspaper industry. New web sites that are offering prettier listings, more functionality, etc. are not disrupting Craigslist as much as they tend to be closer to incremental improvements. Hence the persistence (much to many technologists’ consternation) of Craigslist.

So yes. The Lindy Effect is real. Disruption is rare. But it does exist and it is caused by non-linear changes in technology. We happen to be in a period of such non-linear change because we have figured out how to use computers for lots and lots of things, including machine learning and robotics, DNA analysis and synthesis, scanning and additive manufacturing of objects. As a result the exponential improvements in computers powered by Moore’s and Metcalfe’s ”laws” are invading many other industries. 

Posted: 23rd June 2014
Tags:  disruption lindy effect non-linearity

Computer Bootcamp Day 2: Hosting Your Website

Today was the second day of our mini computer bootcamp for friends and family. The goal was to have a functioning and hosted website up and running.

We started out by using nslookup to find the IP address of a web server — we used ziggeo.com as an example. If you are on Mac OS you can simply do this in terminal (which we learned about yesterday and used again extensively today — I am not sure what the easiest way to do this is on Windows, but if you have your Raspberry Pi handy from yesterday you can use that instead). We then typed the IP address directly into the browser’s address bar to demonstrate that it does indeed load the same web site.
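For anyone following along without nslookup handy, the same lookup can be done in a couple of lines of Python (the function name resolve is just my label; under the hood this uses the same DNS resolution the operating system provides):

```python
import socket

def resolve(hostname):
    """Return the IPv4 address for a hostname, the same answer nslookup gives."""
    return socket.gethostbyname(hostname)

# In the bootcamp we looked up ziggeo.com; the address you get for a public
# domain depends on where and when you run this, so here we demonstrate with
# localhost, which resolves the same way on virtually any machine.
print(resolve("localhost"))
```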

We then learned about the Domain Name System (DNS) and that it allows computers to turn a domain name into an IP Address. I explained the structure of a domain name and what a fully formed URL looks like. We then talked about HTTP and how the request and response cycle works between the web browser and a web server on the other end, with an initial GET request potentially resulting in many more files being requested from the original and other servers.
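To make the request/response cycle concrete, here is a small self-contained Python sketch that runs a tiny web server locally and then fetches a page from it, playing both the server and the browser side of one GET cycle (the handler class and page content are made up for the demonstration):

```python
import http.server
import threading
import urllib.request

# The server side: answers each GET request with a small HTML page
class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello from our little server</body></html>"
        self.send_response(200)                        # status line of the response
        self.send_header("Content-Type", "text/html")  # headers
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # the body the browser renders

    def log_message(self, *args):
        pass  # keep the demo output quiet

server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: one GET request, one response
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    status = resp.status
    html = resp.read().decode()

server.shutdown()
print(status, html)
```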

We then used Developer Tools in Chrome to actually examine how all of these parts get loaded starting with the HTML of a page. We went to both the Elements tab and the Network tab in the Developer Tools. In the Network tab we looked at how all the components of a page, such as CSS and Javascript files and images, got loaded. And we played around in the Elements tab with actually changing the content on the page.

We then proceeded to install the Apache web server on our RPis via sudo apt-get install apache2. To make sure that they were fully up-to-date we first ran sudo apt-get update and sudo apt-get upgrade (the latter runs for nearly 30 minutes). While we were waiting we dove a bit deeper into HTML and created sample files on our laptops which we loaded using “File -> Open File” from our web browsers. We included at least one unordered list using the ul and li tags as well as an image using the img tag.

Once we had our RPis running Apache we used our locally created samples to replace the contents of /var/www/index.html (use cd /var/www first and then sudo nano index.html to edit). We had great fun visiting each other’s web sites simply by typing the IP address of the RPi into the browser address bar. I explained that these IP addresses were only valid on the local network and had been assigned via DHCP. I then showed how I could make any one of the RPis appear on the public internet by adding a rule to the firewall (and drew a diagram showing how the firewall separates the public internet from the local area network we were on).

During lunch we talked more about the HTTP request-response cycle and how there are different request methods such as POST, PUT and DELETE. Answering a question I explained how cookies are set and then included in subsequent requests to a domain and how inclusion of code from other domains, such as a Facebook like button which is served up from Facebook’s servers, means that when you visit that page, Facebook knows about it. We also talked about how web content can be cached at multiple layers such as Cloudflare, your local machine or the current browser session.

After lunch I spun up a cloud server for each student and one for myself at Digital Ocean. We used ssh to connect to our servers and once again installed Apache and started to edit index.html. We then learned how to update the zone file at our domain registrar to make a domain point to our cloud server. Thankfully Susan and I have lots of dormant domains and we let everyone in the bootcamp pick one for this exercise. Everyone was super excited about how easy it was to have their own website up and running.

We then added a CSS file to our website by including a <link rel="stylesheet" type="text/css" href="main.css"> in our html and started styling it with simple changes, such as background and font color and using sans-serif as the font-family. We learned the basics of CSS syntax: specifying a selector such as a tag (or a class or an id) and then adding styles.

As a next step we added a bit of Javascript to our site using a <script type="text/javascript" src="first.js"></script> tag. At first we added a simple alert("Hello World") but then learned how to attach this code to an element on the page by using the onClick event on one of the HTML tags. One of the students pointed out that I had now inserted some Javascript directly into the HTML while I had earlier said that one shouldn’t put styles there. So I wound up explaining how to use the jQuery library to add the onClick event handler from the Javascript.

At this point everyone was happy but exhausted and we called it a day. We have left the cloud servers and websites up for now so that everyone who participated can revisit theirs when at home. This seemed like a big success and I can feel a follow-up coming in the fall — I certainly thoroughly enjoyed myself!

Posted: 22nd June 2014
Tags:  computer bootcamp website hosting learning

Computer Bootcamp Day 1: Raspberry Pi

This weekend we are holding a two-day computer bootcamp for our kids, some friends of theirs and a handful of adults. Here is a recap of the first day both as a reminder and reference for those who participated and as an outline for anybody else who wants to try this.

We started by watching the first minute or so of a clip showing NASA mission control. I then explained that for the Apollo 11 program NASA had five mainframes from IBM (System 360 Model 75 with 0.034 MIPS and 1 MB of main memory each). We then found that the Raspberry Pis in front of us come with 512 MB of main memory and can perform at 100s of MIPS. Put differently, each of us was holding a computer more than 100 times more powerful than the entire computing infrastructure for the moonshot!

We put the RPis in this great case (purchased as part of the great RPi starter kit from Adafruit) and connected one of them to keyboard, mouse, monitor and then power. Of course nothing happened. Why? Because we had no operating system installed. I got everyone to search for how to install an operating system for the RPi. We then proceeded by downloading Occidentalis 0.2 (based on Raspbian Wheezy, which in turn is a variant of Debian, which is a type of Linux). We used the Raspberry Pi SD Installer to flash the Occidentalis disk image onto a 4GB SD card. We did this from the command line using terminal in OS X and I explained that the name terminal came from the original dumb terminals.

Since we had a few minutes while the SD cards were being written we played a little game. Two students paired up and one had to get the other to draw a letter on the whiteboard by giving precise commands about line strokes. After that we discussed how programming or computational thinking is something that we all do to some degree every time we explain to someone else how to do something.

We now had working RPis and booted each of them up connected to a monitor, keyboard and mouse. I walked everybody through how to configure the keyboard layout, timezone, etc. and soon enough we had them running and fired up the Scratch environment to play around a bit. We then proceeded to install a wifi card and had our first encounter with the GNU nano editor. We also learned about why we need to use sudo if we want to edit a file such as /etc/network/interfaces which holds the wifi configuration information.
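For reference, the wifi stanza we added to /etc/network/interfaces looked roughly like this (the SSID and passphrase are placeholders; the exact options depend on how wpa-supplicant is set up, so treat this as a sketch rather than the exact file we used):

```
auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
    wpa-ssid "YourNetworkName"
    wpa-psk "YourPassphrase"
```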

We then learned how to use sudo ifconfig to display the IP address of our RPis (and wrote them all down on a big white board). I then drew a diagram that showed how all of our addresses were local to the network we were on and not reachable directly from the internet. We used ping and traceroute from the command line to examine the network between our RPis and also computers on the internet. Since there were 10 students in the group in total, each with their own RPi, but only 5 monitor and keyboard setups, it was now the perfect time to learn about ssh and how to use it to connect to a headless RPi. In fact we took several of the RPis and distributed them to different locations around the room to really demonstrate that we were remotely connecting into the machines. For the two students with Windows machines we used PuTTY.

Now it was time to do a little bit of programming. We again used nano to write a small program in Python that adds the numbers from 1 to 1000 (yes, there is a simple formula for that but the point was to illustrate how fast the machines could do this brute force). We then also wrote the same program in C and used gcc to compile it. We noted that our C program executed significantly faster than our Python one. I explained that what we read and write is the source code which needs to be translated into machine code which are the instructions executed by the computer.
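The little Python program was along these lines; the closed-form formula n(n+1)/2 makes a nice cross-check (this is my reconstruction, not the exact code we typed):

```python
# Brute-force sum of the numbers from 1 to 1000, as we wrote it on the RPis
total = 0
for i in range(1, 1001):
    total += i
print(total)  # 500500

# The closed-form formula n(n+1)/2 gives the same answer
n = 1000
assert total == n * (n + 1) // 2
```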

To show that we can extend our RPis we then all put together a very simple breadboard containing an 8x8 three-color LED matrix. We downloaded the sample code for lighting up the LEDs using git. There were a lot of screams of excitement when the LEDs connected to a headless RPi started to flash.

The plan for tomorrow is to write a short program to fetch some value from the internet (eg the weather) and then make the LEDs light up accordingly. We will then also set up a website on our RPis and finally do the same using a cloud server.

Posted: 21st June 2014
Tags:  raspberry pi computer boot camp learning

Shapeways: Make Things With Code!

Shapeways is a partner for Google’s brand new MadeWithCode initiative aimed at getting more girls excited about coding and computational thinking. What is important about this initiative is that it will broaden the understanding of what can be done with code. Code is not just games or websites. Increasingly it is everything and everywhere (even in a long piece on computational thinking in Mother Jones).

So what I am particularly excited about is how this initiative will introduce many more people to the idea of constructing physical objects through code. This is a key enabler for mass customization and for creating objects on demand (rather than having wasteful warehouses full of stuff that may wind up in a landfill). Shapeways makes this possible and even easy by providing a powerful 3D Printing API and the incredible ShapeJS Javascript Library.

The MadeWithCode bracelet project is built on top of these. Anyone can use the Shapeways API and ShapeJS to create their own customizable or fully code generated objects. This opens the door to amazing possibilities, such as the automated creation of a perfect fit lightweight arm or leg brace instead of a traditional cast. I can’t wait to see what people come up with!

Posted: 19th June 2014
Tags:  shapeways google code api shapejs

Unbundling of the Job

Last week I posted a draft outline for my planned book on “The Coming Information Age” with a proposed chapter on the “Unbundling of the Job.” This is an idea that first occurred to me when I attended a workshop on Work and Value in the Digital Economy at MIT in late 2012. Here is what I wrote at the time:

Do people need jobs or can we deliver what jobs provide some other way and in a potentially unbundled fashion? The “jobs of a job” include income, structure, social connections, meaning, and at least in the US, access to healthcare.

Let me elaborate on this idea here. Whenever I mention the idea of a Basic Income as a way of addressing the impact of technology on the labor market someone will reply “but people have to work!” (usually in an exasperated voice). When you then ask “why?” you get one of the following:

1. Because people need structure in their lives. The benign interpretation of this is a genuine concern about people being bored if they don’t have work. It is more likely though a secular variant of “idle hands make the devil’s work” — a longstanding suspicion that people will be up to no good if they aren’t working.

2. Because people need companionship and communication. It is absolutely the case that historically work was where we spent most of our waking time and was therefore the fulcrum for companionship and communication outside and possibly ahead of the family.  

3. Because people need meaning and they get meaning from work. In the US it is common to ask people “what do you do?” to find out what kind of work they do. This is often followed (implicitly) by a conclusion about what kind of person they are based on their work.

4. Because people need healthcare. Thankfully with the Affordable Care Act we have started to unbundle healthcare from the job. By doing so we have moved healthcare into category #5 below. 

5. Because people need to pay for food, housing, etc. That last objection is of course what the Basic Income is designed to address. I will write more about the math of that in a world of technological deflation.

None of these are really about work qua work. Rationales 1-3 completely ignore that other activities and networks can also meet those needs. More fundamentally they reflect a view that people are not capable of truly living in freedom, where freedom includes choosing what to spend one’s time on. We have come to hold this limited and pessimistic view as a result of centuries of systems that relied on the control of (most) people’s time and effort first for agrarian and then for industrial production.

In the upcoming posts I will write about alternative ways to address these needs without a traditional job. As a quick preview: people are creative (and will be more so if we change our education system) and will find interesting things to spend their time on. And importantly in an unbundling based on a Basic Income there can and will still be a market for labor — it is just that people will be in a very different bargaining position.

Posted: 17th June 2014
Tags:  work labor jobs unbundling technology

The Coming Information Age (Possible Book Outline)

As I am blogging my way through the various aspects of a transition to the Information Age I am still keeping in mind that eventually I would like to gather up all the material into a book. So here is a possible outline for such a book. Would love to get feedback on what’s missing (or too much), different possible orders and anything else that comes to mind. So please fire away with comments!

1. Introduction / Motivation

— fundamental premise: industrial -> information is a transition as important as hunter gatherer -> agriculture and agricultural -> industrial

— why now? because we can see the beginning of the changes, examples: self driving cars, face recognition, machine translation

— changes build slowly at first but then accelerate, example: up to 1900 almost 50% of people still worked in agriculture

— prior transitions did not go very well and we now have more destructive power at our hands than ever before (nuclear, biological, etc)

— much of the policy debate is stuck in the industrial age and amounts to re-arranging the deck chairs on the Titanic

2. Industrial Age Recap

— brought us extraordinary material progress - examples: air conditioners, cars, medicine, life span

— not yet equally distributed but have been making great strides in eliminating poverty globally

— fundamental engine of the industrial age: consumer demand -> products made with capital and labor -> wages to labor -> more demand

— capitalism became the winning model in the industrial age because entrepreneurs created innovative products to meet consumer needs

3. Industrial Age Breaking Down

— substituting machines for labor, so wages declining and eventually going away and with them the demand for more stuff

— initially counteracted this challenge through consumer debt

— already producing more stuff than we need to meet our material needs (hence the rise of advertising)

— research shows that stuff doesn’t make us happy, experiences do, biggest growth need is spiritual (not material)

— facing species threats including environment, asteroids, disease that need to be addressed at global (not national) level

4. Information Age Arriving

— first time in human history that everyone on planet is connected instantly at no (marginal) cost

— information is non-rival - we should share as much of it as possible (eg how to build an electric car)

— combined with exponential improvements in compute power and storage

— makes many things possible that were previously impossible, examples: collectively creating and editing an encyclopedia, teaching a machine to translate between languages, finding cures for diseases

5. Fundamental Change 1: From Hierarchies to Networks

— increased information flow allows for more cooperation and more motivation

— individuals become peers in networks

— networks are resilient and global

6. Fundamental Change 2: Technological Deflation

— automation, on-demand manufacturing

— asset utilization

— rise of consumer surplus

— importance of the information commons

— attacking healthcare and education

— energy?

7. Fundamental Change 3: Unbundling of the Job

— separating of income, meaning, daily structure, health insurance

— guaranteed basic income in a deflationary world

— educating for motivation

8. Fundamental Change 4: Moving Past the Nation State

— the fall of artificial boundaries

— the rise of cities

— from big government to information standards

9. Threats Along the Way

— incumbents against the Internet (eg re-imposing geographic boundaries, overextending copyright) 

— breakdown of attention and money in politics

— digital balkans and lack of empathy

— surveillance versus open sharing

— income and wealth inequality in the transition

10. How to Prepare

— individuals: fight for the Internet, pursue passion

— parents: motivation is paramount

— educators: make connections

— companies: empower networks

— governments: embrace transparency

11. Outlook: Towards Abundance

Posted: 12th June 2014
