Venture-Less Capitalists Chase Their Tails

Through a series of comments this week, the Venture Capital industry has proven once again to be a highly fragmented group of independent thinkers who will likely never play nicely with each other. A great article in the NY Times on the state of the Venture Capital market shows contending visions for the path back to glory… from removing firms from the market to adding them, from increasing investment and fund sizes to lowering them. And as you might imagine, they all think they’re right. Don’t we all?

Whatever happened to raising money and investing the right amount in the right companies? It seems like all of the current debate is about the sweet spot of investment size and the sweet spot of fund size. In an industry that thrives on the ability to find a unique diamond in the rough and the savvy to understand and catalyze the creative class, I’m surprised they think it should be so simple. Really though, since the industry is broken, and they broke it, should they really be listening to their own advice as they try to piece together a solution? This is all quite circular if you ask me.

Venture Capitalists are used to giving lots of advice. I think maybe it’s time that they sat down and asked some entrepreneurs (one of their customer bases) what they need. Oh… I think I’ve heard that exact advice in the VC boardroom before 🙂

Entrepreneurs need:

  • Financial partners who think operationally
  • The right amount of capital at the right time
  • Patient, level-headed board members
  • VC firms made up of former company operators (up to a maximum of one Wall Street financial numbers guy among the partners)
  • A clear understanding of the mutual expectations of the relationship
    • What is currently the biggest opportunity in our market?
    • How often will we talk about the business (daily, weekly, quarterly, annually)?
    • How often will we receive advice from you?
    • Are you providing money or leadership or both?
    • Who exactly will you be introducing and promoting us to?

What’s the Problem? Global Warming is Really Cool.

Please excuse the painful play on words in the title of this post.

I’m a big fan of the movement around Global Climate Change right now, and if nothing else it should help the US and other industrial powers stop polluting the world for the rest of civilization. Let it increase the prices we pay for energy. We’ll deal. The price we pay for almost anything these days is not the true “price” in terms of global impact, so an increase in prices is reality and the right thing to do. Over time, if demand for clean energy increases as it should, prices will come back down through innovation and the proper global sourcing of effort.

Anyway, my brother-in-law sent me a link to this article today and I found it very interesting.

What if global-warming fears are overblown?

This is the first decent scientific argument I’ve seen against global warming, or at least against the urgency that the current movement around Global Climate Change is professing. I love a good counterpoint to get me thinking and this one impressed me.

It raises a great question… is our strategy for measuring global temperatures flawed? The article claims that over time (let’s say the last 100 years) temperature sensors have migrated from more rural areas to more urban areas, where the mass of materials common in urban environments now causes higher heat near the sensors. It defends this position by noting that by far the greatest increases in temperature occur in readings taken at night, which makes sense under this theory because large buildings and concrete in urban environments hold heat over from the day into the night.

In addition to this argument the article also states that the volume of new ice forming in the Southern Hemisphere greatly exceeds the volume of ice melting in the Northern Hemisphere. This means that the total amount of ice on earth is actually currently increasing although we are most often shown pictures of receding glaciers throughout Europe and Alaska.

Furthermore, the scientist pushing all of these positions claims to have never taken a single dollar from corporations or the oil industry. Very interesting. Either this scientist is really on to something or he just knows how to perfectly craft a counter-argument that is very difficult to disprove. He was certainly the president of his debate club in college. Or maybe Al Gore stole his wife.

GM & Segway Partner on PUMA Urban Vehicle

Alright, this is sweet… especially to a guy who loves Segways (that’s me). GM and Segway are working together on a two-seater mini-car that they hope will decongest big cities (some pictures). They’re calling it PUMA, for Personal Urban Mobility and Accessibility. For one, it’s really cool. It has two wheels (plus four more little training wheels on the prototype) like a Segway, but has seats like a car. In fact, two little seats. Secondly, it will use energy as efficiently as a car that gets 200 miles per gallon. Fantastic. They make a few good points about the demand for this vehicle probably coming from outside of the US, where bicycles are currently a primary method of transportation.

Although I understand that an electric vehicle is clean energy, and that if it replaced normal vehicles it would remove some congestion from big cities where cars are the primary mode of transportation, I don’t get how this would really benefit cities where everyone bikes. You can fit more bikers in an area than PUMAers, and biking is clearly a very clean mode of transportation. So, at the moment it seems that the target market for their product wouldn’t receive the primary benefit of this vehicle. Regardless, it’s a Segway, so I think it’s totally cool. I drive two miles to and from work each day, so a super lightweight vehicle that I could plug in at night and that had 35 miles of range during the day would be more than I need. Thank goodness I can take back roads (two-lane roads) to work. This thing would become a pancake in an instant on I-40.

Eco Cycle Bicycle Storage System in Tokyo

My business partner at iContact, Ryan, sent me the neatest thing in an email tonight. It’s called the Eco Cycle. It is an underground bicycle storage solution currently being used in Tokyo, Japan. It reminds me of those cart dispensers in airports where you can rent a cart to help you move your heavy luggage to and from baggage claim. But this is even better. This system not only accepts your own bike as input but stores it neatly out of sight underground in a huge stacked cylinder. The system appears to be a whole lot less space-efficient than the common side-by-side bicycle rack you can find on any street in a bicycle-friendly city, but it has several key advantages.

The first advantage is security. Bicycles can be expensive, and discovering your bike missing at the end of the day is not only a loss of money but also the loss of a ride home, one that may cost you dearly in a cab or on foot.

The Eco Cycle system is also much more visually appealing, as it has a visual footprint similar to that of an ATM.

I would bet that the system also reduces the long-term maintenance costs of a bicycle by keeping it stored in a dry environment, out of the elements and away from the wear and tear and risk of frequent passers-by.

Finally I think the advantage of using underground space instead of above ground space is exceptional. In busy cities where a high percentage of commuters move around via bicycle the space to park those bicycles along a sidewalk is a hindrance to pedestrians who use the sidewalk space to walk.

Considering the moderate failure rate of other automated systems we use every day like parking payment systems and drink vending machines I wonder what you would do if this machine ate your bike. Hopefully there is someone on site with a ladder and access to a hatch in the floor.

The Eco Cycle is a fantastic innovation and I love how it resolves an interesting problem in urban planning with a very creative solution. Has anyone seen one of these in person? It almost seems too good to be true.

If You Don’t Remember the News, Google Will Make You Repeat It

Internet Retailer decided last week to break the news of iContact’s June 29th, 2007 funding… nicely timed on their website on September 4, 2008. I was a bit confused to see the headline “E-Mail marketing and blogging provider iContact gets $5 million in funding” arrive in my inbox from the “iContact” email alert I’ve configured with Google News. Good work, Internet Retailer… it’s only a bit over a year out of date.

For me the untimely news release with its automatic relay through Google News was good for a laugh and a minor inconvenience. I deleted the news alert without giving it much thought. Only today did I realize what this type of mistake could actually do… and it did.

This Monday (September 8th), the South Florida Sun-Sentinel newspaper accidentally republished an article from 2002 announcing United Airlines’ (UAL) filing for bankruptcy. In the same fashion as the reposted and outdated iContact funding article, the Google News alert system noticed the target keywords in the article (probably either UAL, the stock ticker for United Airlines, or the name United Airlines itself) and quickly sent out emails to all recipients with alerts registered for those words.
As it turns out, a number of recipients of that email were people with their fingers on the pulse of the public stock markets, and upon seeing the article on the South Florida Sun-Sentinel website they quickly began selling UAL’s stock. Within the day, UAL on Nasdaq fell from $12.50 to $3.00, at which point the market froze further trading. Although the stock rebounded to $10.60 the following day, the difference represents a 15% drop in value due largely to the accidentally posted article in South Florida. I’ve included UAL’s five day chart below from Yahoo Finance to illustrate the activity on Monday.
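As a quick sanity check on that 15% figure, here’s the arithmetic from the quoted prices:

```python
# Sanity check on the ~15% lasting drop: pre-crash price vs. next-day rebound.
pre_crash = 12.50   # UAL price before the erroneous article spread
rebound = 10.60     # price after the next-day recovery

drop = (pre_crash - rebound) / pre_crash
print(f"Lasting drop: {drop:.1%}")  # → Lasting drop: 15.2%
```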

Taking a second look at the Internet Retailer article, I’m noticing something interesting. At the bottom there’s a single paragraph about Sendmail, Inc. that mentions their revenue growth and one of their products by name. Sendmail is in a similar industry to iContact. They are a provider of email sending infrastructure, including software and hardware, although they aren’t a direct competitor of iContact. I’m wondering if the tactic being used here is to republish an old article about a company with a visible brand name likely to have a lot of Google News alerts configured for it, and to then include news about a second company hoping to get its message in front of that same audience. In concept it’s a brilliant way to segment an audience and then use Google as the conduit for your message, at zero cost to you.

I think this could get someone in a lot of trouble, because it involves using a (most likely trademarked) brand name to the benefit of someone other than the trademark’s owner. I’m not sure if this particular tactic is technically illegal, but clearly using a trademark in a confusing way to promote another brand, company, or product is. Although I feel like I’ve discovered a brilliant new guerrilla marketing tactic and the cause of the reposted article mentioning iContact, some facts point the other way, including the fact that I couldn’t find an earlier posting about iContact’s VC funding on the Internet Retailer site, and the fact that the blurb about Sendmail mentions their revenue over the first six months of the year. Had this content first been written in late June 2007, when the funding was actually announced by iContact, the timing mentioned in that statement would make perfect sense. Then again, maybe the piggyback news alert concept was actually utilized back in 2007, if the article went live online then for some period of time.

One Laptop Per Child Worldwide Match Program Announced

I had the opportunity to take a One Laptop Per Child foundation XO laptop for a virtual test drive at the Fortune Brainstorm Technology 2007 Conference in San Francisco last year. Last month I was at home and my dad had one at the house for a while, courtesy of some of his friends at the university, so I got to spend a few hours with the machine, getting online and cruising around its Linux operating system. I was very impressed. The laptop is lightweight, feels very durable, and didn’t seem unnecessarily slow. After many years of work the OLPC foundation has created a very convincing product in the XO, although it’s for sale for $200, not the $100 that was originally targeted. I don’t think that matters; they have made and continue to make their point… and a difference in the world. I’m excited to watch their progress over the next five years.

In the fall of 2007, Nicholas Negroponte’s OLPC Foundation announced a buy-one give-one program in which Americans and Canadians could purchase two of their production XO machines for $399. One would be shipped to the person purchasing the machines in either of those countries; the other would be shipped to a child in a developing country at no cost to that country or the child.

On Tuesday afternoon at the Fortune Brainstorm Technology Conference that I’m attending in Half Moon Bay, California, Nicholas announced that they would soon reinstate the buy-one give-one program but that this time they would allow people in any country worldwide to participate. The crowd was excited.

Nicholas was sitting beside David Kirkpatrick from Fortune, about 10 feet from me, as he unveiled and displayed for the first time the XO laptop running Windows, which he described as a critical milestone on the way to widespread international adoption of the computer. He said this was not because he expects countries to purchase Windows licenses for all of the laptops they will buy, but because knowing that they could install Windows on the machines gives them the confidence to invest in the Linux OS version, knowing they can upgrade the operating system if at any point they have the resources or the need to do so. I find this interesting, and I wouldn’t have thought of it at first because keeping the cost down is critical for this project, but the idea of being prepared for expansion is important.

Chevy Tahoe Create-Your-Own-Ad Campaign

Hopefully you didn’t miss this, but topping the list of stupid web 2.0 ideas was Chevy’s create-your-own Tahoe ad series from the spring of 2006. Basically, Chevy provided the video clips and allowed amateur editors to piece them together in whatever way they saw fit and then upload them to Chevy’s website. As Wired magazine describes it, “The wikification of the 30-second spot – what could be more revolutionary than that?” And what could be more web 2.0 than that? Chevy was clearly setting themselves up for a victory from the collective creative mind of the masses.

The volume of the responses was a success, as nearly 30,000 renditions were submitted. But in the end, the publicity came not from the videos that most creatively promoted the Chevy Tahoe, and not on the Chevy website at all, but of course from the ones quite openly bashing everything the Tahoe stands for on YouTube. And the best part was, the parody ads look nearly professionally made, because Chevy put the fodder for the whole thing right into the creators’ hands. Wired has a great article about the whole thing, but many of their links to the YouTube videos don’t work. I was able to locate two of my favorites, though, and they really make me laugh.

University of Florida Creates the Flying Saucer

I was stopped in my tracks this evening while perusing my daily TechJournal South news update, which I receive via email (powered by iContact, of course). The headline mentioned something about flying saucers and plasma power, so I had to check it out. As it turns out, researchers at the University of Florida have completed a design for a sort of flying craft powered by a new way to create the movement of air, which ultimately creates lift. As if it weren’t enough already to win back-to-back national basketball championships (although that never upset me too much, because it took some of the honor away from Duke, who did it over fifteen years ago when they were good :)) and the football and basketball national championships within the same year, now they have to go and invent a flying saucer. I seriously need to pay this place a visit. Share the love, Florida!

This has to be one of the most creative things I’ve ever seen. Apparently, sending current or a magnetic field through a conductive fluid turns the surrounding air into plasma, and this process creates a swirling of the nearby air which generates lift. They also claim that this movement of air is stable enough to properly support the weight of the craft even with some regular wind turbulence. The next step will be to build a small prototype of their design (about six inches in diameter), which I guess will help them figure out if their idea is actually possible in practice. The good news: with all of the electromagnetic interference this thing will be creating, there’s absolutely no reason you’ll need to turn your cell phone off during takeoff. Not that there’s any decent reason now.

Fortune Brainstorm Technology Conference Questions

I’m attending the Fortune Brainstorm Technology conference in California in July again this year. Last year’s event was incredible and the quality of the speakers and panelists is unrivaled by any event I’ve ever attended. This year they sent me a survey in advance to ask for answers to three questions, likely for use in selecting break-out session topics based on the collective interests of attendees. I’ve included the questions and my responses below:

1. What is the most exciting technology innovation you’ve seen in the past 12 months?

Technology that extracts complex concepts from video streams, i.e., determines context for ads alongside streaming video by understanding what the viewer of the video is seeing. Also, wireless electricity. Current prototypes out of MIT are using resonant energy transfer to deliver an induction signal from up to 15 feet away. The potential among business and personal uses of this technology in practice is incredible.

2. What is your biggest hope or fear for the future, and how does tech relate to it?

My biggest hope for the future is that we can actually build enough clean and renewable energy generation infrastructure to support our demand at a reasonable cost. I think this is an area where the United States could become a worldwide technology leader if we cared enough to make the right investments at the right time.

3. What should be the top priority for the next US president?

Repairing our international reputation to restore the world’s faith that the United States cares deeply about innovating and investing to make the world a better and more peaceful place not just for us but for everyone.

Theory on Visual Data Recall Efficiency

Teaching Machines to Retrieve Visual Data Like our Brains.

I was trading emails on the topic of TED presentations with my friend Charles a few weeks back, and his selection of preferred presentations, which were all focused on visualization technology and concepts, prompted me to come back to an idea I’ve had for years. I’ve provided my first written draft of the concept below, following the links to the TED presentations referenced within.

TED Talks – Johnny Lee: Creating tech marvels out of a $40 Wii Remote

TED Talks – Blaise Aguera y Arcas: Jaw-dropping Photosynth demo

I have a theory that we can teach machines to deal with data the way our brains do, and that if we can accomplish this, then user interfaces sitting in front of large data stores will automatically become simple, efficient, and super quick without any loss of quality as we perceive it. Perception is the key concept we can leverage here.

The example I’ve always presented is the following:

  1. Think of a beautiful waterfall that you’ve visited before. Picture the waterfall in your mind right now.
  2. Think of the exact point where the falling water hit the rocks or pond or stream below. Think about the detail of the water and the rocks, and think about where you were standing as you gazed at the waterfall.

Now, if you were able to do this then my point is made. When your brain recalls visual memories it doesn’t recall every detail immediately, it brings in detail as-needed. As I forced you to think about specific elements of the waterfall scene your brain loaded the necessary additional detail into memory. These details were entirely irrelevant to your initial visual memory of the waterfall when I first prompted you to think about it and only became necessary as you “navigated” it during the following seconds.

But in the digital world, if I asked my computer to show me the picture I once took at a waterfall, it would have been required to load every pixel of information about the scene, thus requiring the maximum data load up front before I could begin any processing or navigating within that image. But to my brain and my recollection of that scene, most of that data is absolutely irrelevant up front. So the challenge is: how do we teach machines to retrieve only the minimum amount of information we need, at the time we need it, in order to achieve the efficiency our brains utilize every day? When you think about it, our brains are massively efficient because of their ability to retrieve exactly the minimum amount of information needed to process a thought. Machines are far less efficient and have much to gain from operating the way we think.

So a good place to start thinking about a process for brain-like efficiency is within a large visual user interface (reference: Johnny Lee’s presentation), where normally the greatest amount of information is pre-loaded by default. In our new model, you ask your computer for the waterfall scene and it loads a very small image that contains all of the colors, shapes, and other critical elements of the scene but with very little detail. Then, as you click (the tactile parallel of what you did in my exercise above to direct your brain to focus on each specific element I prompted you to visualize in detail), the image enlarges and allows you to drag, rotate, and spin (the same things your brain lets you do in your visual memory). Thus, the statement that the amount of data in the scene isn’t the problem (reference: Blaise Aguera’s presentation) rings true, because you the user can only comprehend and utilize a finite number of pixels within your viewing area.

Your brain has a similar pixel resolution (figuring it out may be the answer to all of this), in which anything that isn’t currently represented within that frame of resolution is of absolutely no concern to you. Like the way the brain interprets motion (i.e., 30 frames/second is fast enough to trick the brain into seeing motion when instead we’re displaying quickly changing still images), there may be a simple pixel resolution threshold of detail for the average human brain.
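The retrieve-only-the-minimum idea above can be sketched in code. Here’s a toy Python illustration (all names and the averaging scheme are my own, purely illustrative): an image is stored as a resolution pyramid, and a request pulls back a coarse overview plus full detail only around the point of focus.

```python
# A toy sketch of "detail on demand": store an image as a resolution
# pyramid and fetch full detail only around the point being examined.

def build_pyramid(image):
    """Downsample a square grid of brightness values by 2x averaging
    until a single coarse value remains; return coarsest level first."""
    levels = [image]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        n = len(prev) // 2
        nxt = [[sum(prev[2 * r + dr][2 * c + dc]
                    for dr in (0, 1) for dc in (0, 1)) / 4.0
                for c in range(n)]
               for r in range(n)]
        levels.append(nxt)
    return list(reversed(levels))

def fetch(pyramid, focus_row, focus_col, radius):
    """Return a coarse overview plus full-detail pixels near the focus.
    Only the focused region pulls data from the finest level."""
    overview = pyramid[0][0][0]        # single averaged value for the scene
    finest = pyramid[-1]
    n = len(finest)
    detail = {}
    for r in range(max(0, focus_row - radius), min(n, focus_row + radius + 1)):
        for c in range(max(0, focus_col - radius), min(n, focus_col + radius + 1)):
            detail[(r, c)] = finest[r][c]
    return overview, detail

# A 4x4 "scene": a bright waterfall stripe down the middle of dark rock.
scene = [[1.0 if c in (1, 2) else 0.0 for c in range(4)] for r in range(4)]
pyr = build_pyramid(scene)
overview, detail = fetch(pyr, 1, 1, 1)
print(overview, len(detail))  # → 0.5 9 (nine detailed pixels instead of sixteen)
```

The point of the sketch is the data shape: the coarse overview is always cheap, and the fine data transferred scales with the focus window, not with the full scene.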

Here’s how I would test this. I would project a wall-sized image using a high definition projector on the wall in front of me. I would then install an eye-tracking device near the screen so that I could face the screen and have my eyes tracked instantly. You could use a four-point calibration tool for this, just like the Nintendo Wiimote guy does for his infrared whiteboard, but in this case you would point your eyes at each of the four points to calibrate your position relative to the display screen. You could even take this to the next level by using head-tracking infrared so that after calibration you could actually move around the room, with your changing position constantly recalibrating (but that’s for version 2.0).

Anyway, using my current example, the computer would contain all of the information about the waterfall scene but would only display high resolution detail for the exact number of pixels my brain can comprehend, directly around the area where I focus my eyes. So if I look away, the screen goes back to a very simple low-data environment, displaying just the exact number of pixels my brain can comprehend across the entire area of the screen. If my brain can only comprehend about 5,000 pixels, then an entire 100-inch screen would go back to a very simple representation of shapes, colors, etc. while I focused away from the screen. Theoretically, if we get the pixel count (resolution) right, I should still be able to recognize the scene on the screen in my peripheral vision. Then, the second I focus back on the screen, the image would recall the detailed pixel information it needs to complete the area of focus around where my eyes are pointed. Think about it: that’s nearly the world we already live in. Think about what’s in your peripheral vision right now; it’s very low-data. Think about what your eyes are focusing on right now; it’s very high-data.
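The eye-tracking experiment above boils down to one decision: given where the eyes are pointed, how much detail does each region of the screen deserve? A minimal sketch, with made-up tile sizes and distance thresholds standing in for whatever the real perceptual limits turn out to be:

```python
# A minimal sketch of gaze-contingent detail: each screen tile gets a
# detail level that falls off with distance from the gaze point.
# The tile size and distance thresholds are illustrative guesses.
import math

def detail_level(px, py, gaze_x, gaze_y):
    """Full detail at the gaze point, progressively coarser farther out."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist < 100:   # foveal region: full resolution
        return 3
    if dist < 300:   # near periphery: reduced resolution
        return 2
    if dist < 600:
        return 1
    return 0         # far periphery: shapes and colors only

def detail_map(width, height, tile, gaze_x, gaze_y):
    """Detail level for every tile center on a width x height screen."""
    return [[detail_level(x + tile / 2, y + tile / 2, gaze_x, gaze_y)
             for x in range(0, width, tile)]
            for y in range(0, height, tile)]

# Gaze at the center of a 1920x1080 screen, 120-pixel tiles.
levels = detail_map(1920, 1080, 120, 960, 540)
print(levels[4][8])   # tile near the gaze: full detail (3)
print(levels[0][0])   # far corner: coarsest level (0)
```

On every eye movement you would recompute this map and stream new pixel data only for tiles whose level increased, which is what would make the scheme fast enough to be invisible.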

The ramifications of this are incredible. Think of the efficiencies. Here we are trying to drive high definition streaming video onto every screen in our houses and on our mobile phones, eating up tons of bandwidth and processing power doing it, when instead our brains need very little information about an image in order to actually “see” it. In the video game world alone the gain in processing power would be immense and would enable all kinds of improvement and optimization. Entire 3D worlds would only need to be computed in the smallest amounts, with the appropriate detail provided on demand when the brain requests it by pointing the eyes at a specific element. This creates the perfectly optimized visual user interface: point your eyes at it and you see it, look away and it nearly disappears.
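To put rough numbers on those efficiencies, here is some back-of-the-envelope arithmetic. The fovea window size and peripheral density below are illustrative guesses, not measured values:

```python
# Rough arithmetic on the savings: compare shipping a full HD frame
# against a frame that keeps full detail only in a small gaze window.
# The window size and 1/64 peripheral density are illustrative guesses.
full_frame = 1920 * 1080                 # pixels in a full HD frame
fovea = 300 * 300                        # full-detail window near the gaze
periphery = (full_frame - fovea) // 64   # the rest at 1/64 pixel density

foveated = fovea + periphery
print(f"{full_frame / foveated:.1f}x fewer pixels per frame")  # → 17.1x fewer pixels per frame
```

Even with these made-up numbers the order of magnitude is the interesting part: most of the data in a frame goes to pixels nobody is looking at.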

My hunch is that if you created the HD projection screen environment I mentioned above, you would actually not notice what was going on, because every time you looked at something it would appear in great detail instantly. It might be frustrating to try to quickly glance at a section of the image while it was still in low resolution, since you would know the detail existed just outside the focus of your eyes. But if done fast enough (and with the data optimization inherent to this model, it could be done extremely quickly), you shouldn’t be able to tell a difference, i.e., the same trick being applied here as the magic of the motion picture. Now, as an onlooker to whom the image was not reacting, you would see what was going on instantly, and it would probably annoy the crap out of you. But this technology could additionally support multiple people at once by providing detail to every area of the image that at least one person in the room was looking at at any point in time. As with most images, especially moving ones, the areas we focus on are actually quite limited and probably pretty consistent from person to person, so much of the optimization I predict here should remain even in a multi-person environment.