Companies I’m Interested In

Recently, I’ve been thinking of larger themes in the tech world that have piqued my interest. I wanted to share my thoughts and spark discussion amongst other potential entrepreneurs out there. Many of these themes are strategic approaches to the broader question of how AI is going to be commercialized into real products, but others are not directly related. If you’re pursuing ambitious efforts along any of these lines, I’d definitely look at funding or supporting them either personally or through Graduate Fund. I’d love feedback on these ideas so if you have it, please share it directly!

Enabling Shared Resources In High-Density Spaces

One of the fascinating trends over the last ten years is the rise of sharing resources enabled through our mobile phones. The first generation of these companies were the so-called “sharing economy” companies like Uber and Airbnb, but we’re now seeing the second iteration of these companies. In particular, people are taking advantage of the high density of cities to amortize the costs of expensive resources across many people so as to distribute their benefits at lower prices.

The scooter companies are the first of the new generation of shared resources. What’s next?

We’ve seen this play out in the transportation space with scooter companies (e.g., Lime, Bird) enabling low-cost last mile transportation. Transit aligns well with this model because the assets move, and technology can track those movements and make them more efficient. However, this same concept can be extrapolated and applied to other problem areas. Bungalow follows this model for a cheaper, more flexible and social living experience. Flexe is using on-demand warehouses for last mile fulfillment. This isn’t a new idea. There have been low-tech examples of communal resources for many decades including laundromats, pay phones, and mailboxes. My central question: what new tech-enabled resources will be placed across cities in the US to democratize access to their services/benefits, and why will technology be the differentiator that makes their distribution possible when it previously wasn’t?

Defensibility and differentiation will be key for ventures pursuing this model. Companies in the scooter market have struggled to differentiate and are locked in an arms race on pricing and availability. That means success is driven by thinner margins and capital-intensive land grabs, which makes the return profile much worse for investors. So what are some sources of potential defensibility?

  • Social network effects — Pokemon Go isn’t a physical product, but it is location-based and centered around key points in cities. As more people in a network play Pokemon Go, it becomes harder to build a competing game with equivalent social engagement.

  • Data network effects — Particularly for training better models in the long term. Aclima is one company which is cleverly using existing vehicles to continuously capture hyperlocal data on the environment. As time passes and they gain more density, the dataset becomes more longitudinal and local.

  • Proprietary technology — Particularly in relation to automation. I’ve seen a lot of people work on laundry folding robotics and automated grocery stores, but a better model is building these locations from scratch to put less dynamic expectations on the technology to start. Think Amazon Go vs. Walmart with robots in it. This is what Creator has done with restaurants and leaning into a fully integrated model should give them differentiated efficiency and quality advantages.

  • Brand — Brand is always defensible, but brand is typically built over long time periods. Uber has perhaps built the strongest brand among these companies, defining the category the way Kleenex has for tissues. Brand differentiation, however, is hard, especially when a new player can become very visible and concentrated in an area, as they can with these kinds of products (e.g., Lyft’s pink mustache stealing market share from Uber).

Full Stack Companies (Digital + Physical or Service)

I also love so-called Full Stack companies that are working on a combination of digital and physical or service-based aspects. As Keith Rabois says, “if you can’t sell them, compete with them.” There are a number of companies taking this model: from Atrium in the legal services space to a plethora of companies in the health tech space (Cricket Health, Livongo, etc.). One can’t, however, just take a legacy business or service and slap technology or a smartphone app on top: Uber is a far better experience than a taxi driver carrying a smartphone that riders can hail them on. Proper implementations of this model require deep thinking about what new end-user experience the technology actually enables.

Livongo changes the way we manage chronic patients by wrapping a service around a set of hardware devices. They’re expected to IPO soon, but more and more of these are coming.

The most compelling companies in this bucket are looking at how to make the entire stack orders of magnitude more productive with technologies like AI. These companies are running a manual process and capturing data on it so they can build models to automate it. Here, nuance in understanding the problem can be translated into a unique technical approach around a proprietary structured data set. If you don’t do this just right, however, you end up with a service business that won’t scale. Facebook M is an example of a project that took this model but struggled to figure out how to capture data in a way that enabled real model building.

The challenge is that these models typically require deep knowledge of which kinds of models different ways of structuring and capturing data will unlock (as well as a lot of experimentation). However, most ML scientists don’t want to spend their time on data capture design; they want to work on modeling. This is a huge mistake for anyone who wants to be an applied scientist or use data to unlock new possibilities. At Curai, we’ve spent an incredible amount of time on the structure and format of the data we capture and expect it to be a long-term differentiator.

You can build a real scalable and sustainable business based on proprietary, defensible technology by starting with a 30% cost advantage and getting to 80% over 4–5 years.

Generally speaking, for these models to be transformational, the operational components of your business should scale logarithmically instead of linearly with growth.
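As a toy illustration of the cost dynamic above (all numbers invented for illustration, not from any real company): a pure service business’s costs grow linearly with volume, while a business that automates an increasing share of the work sees its marginal cost shrink, turning a 30% cost advantage today into an 80% one as automation improves.

```python
def service_cost(units, cost_per_unit=100.0):
    """Pure service business: total cost grows linearly with volume."""
    return units * cost_per_unit

def automated_cost(units, cost_per_unit=100.0, automation=0.30):
    """Tech-enabled business: the automated share of each unit's work
    carries near-zero marginal cost, so total cost falls as the
    automation fraction rises."""
    return units * cost_per_unit * (1.0 - automation)

baseline = service_cost(10_000)                    # linear service cost
early = automated_cost(10_000, automation=0.30)    # 30% cost advantage today
later = automated_cost(10_000, automation=0.80)    # 80% advantage in 4-5 years
```

At the same volume, the automated business spends 70% of the service baseline today and only 20% once automation reaches 80%; the business scales with the model, not with headcount.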

As an aside, “Full Stack” as a term also broadly applies to the future of hardware companies. Consumer hardware, in particular, has proven to be a tough task due to the capital intensive nature of hardware and the difficulties of sustainably acquiring customers. As investors shy away from traditional hardware companies, there’s a real opportunity for companies pursuing unconventional business models to make serious inroads. Many of the best companies here will deliver services around the product with a subscription model or sell directly to enterprises. Zoom’s recent IPO success was an early proof point here. At Totemic, we’re taking both approaches: distribution through the enterprise and a valuable service packaged around a hardware device.

Hardware Development as a Platform

On the topic of hardware, I’m interested in companies that are reducing the cost of developing hardware and firmware. One thing that would make hardware investments much more palatable would be the equivalent of platforms such as AWS and Docker for hardware. At some point, hardware development should be nearly as easy as developing software is today. This is a difficult problem, as it requires solving hard technical and operational challenges in a somewhat immature market, but if we’re looking 10 years down the road, there’s no way developing hardware will be as hard and costly as it is today. Otherwise, the development of physical products will be limited to big companies, and as I previously pointed out, VCs will lament the lack of non-FAANG platforms that their next generation of startups can build on.

There is a robust ecosystem for software development, but very little for hardware.

One way to solve this problem is to reduce the complexity and cost of building these products. Fictiv is one noteworthy manufacturing approach in this area, and there are many other approaches to cheaper, modular 3D printing for things as wide-ranging as shoes (Feetz) and buildings (MightyBuildings). This kind of deeply tech-driven, customized manufacturing is the way of the future. We’re also seeing companies like Jitx on the hardware side and Memfault on the firmware side, which are exactly the types of tools we need to decrease development costs.

Building Hardware for AI

One of the recent major drivers in AI progress has been increased compute power. Some would even argue that it’s the only real source of progress. Many of the largest innovations have been clearly compute-enabled: large-scale training efforts like OpenAI Five and GPT-2, scaling RL with deep learning, GANs, early brute force versions of Neural Architecture Search, etc. With the death of Moore’s Law, we need new innovations in how we continue to grow our compute power. Lots of folks are looking at this, including Nervana (acquired by Intel for $400M), Vathys, Cerebras, and others, but we need more and we need them to be more non-linear.

It’s worth noting that non-linear approaches here are where there is room for startups to win. Google, NVIDIA, and the other big guys will invest in the linear/logical improvements here and startups are going to struggle to beat them on deploying capital to something like a TPU. However, there are a lot of high-risk approaches that could be worth exploring for startups like analog computation. Another example is what Rain is doing with reservoir computing.

One other angle that I’d love to learn more about: any companies out there that are enabling the building of model architecture-specific ASICs at very low cost. We’re seeing more and more companies creating custom compute for their AI products (Tesla, Google, Facebook, etc.) but given the cost of developing these chips, it’s mostly been restricted to the big companies. Does someone have a method that will allow such chips to be built in a simple way at low cost? What does an FPGA for AI look like? Can we make special purpose quantum processors that are feasible long before full quantum computing that run ONE specific class of algorithms/architectures in a high leverage area? This would represent an alternative to general-purpose AI compute where we see many gains from generalized components of neural network architectures like quantization and faster matrix multiplication. This could be particularly worthwhile on edge devices or IoT applications where power is also a factor and it’s important for inference to happen on-device for privacy and security reasons.

Filtering Information and Enterprise AI

We live in a world that is increasingly saturated with information and stimulation for our brains. It’s no secret that this isn’t healthy, and I would argue it’s making it hard for us to live and be productive. The last generation of products made the world’s information instantly available and shareable with friends and coworkers (e.g., Google, Bloomberg Terminal, Slack, Facebook). As a directional arrow of progress, the next generation will instead help us filter this information overload. This framework will apply to everything in our lives: AI will be an assistive technology that helps us know only what we need to know when we need to know it.

The big opportunity starts in the enterprise space where the problem is simple: making sure the right people have the right information in a company is a tremendous source of value. Given how much shit gets dumped in Slack, wikis, email, etc., there’s a real opportunity to better figure out how to share, index, and retrieve information. Any solution will have to include personalized context-sensitive search across the whole range of apps, devices, and datasets that capture the entirety of someone’s work environment. Anything from recording and indexing meeting notes to figuring out how to keep a more dynamic wiki up to date feels like a massive opportunity for productivity gains along these lines.

I am also intrigued by this trend applied to consumers, and in particular, any solution that can pare down the amount of low information density, low retention reading I do, while still keeping me in the know on content that is relevant to me. I don’t want to have to scroll through Twitter frequently throughout the day to get this information. This has been tried before, but it’s still a huge problem. I like approaches like the one my friend Zach Hamed took with Macaw, but this could probably be built into a learning system based on user feedback loops. This would represent the evolution of Silicon Valley’s content paradigms to a much healthier place.

Oh, and, as I mentioned on Twitter, I’m waiting for the company that is building “Google for your brain.” If someone has any seemingly plausible way to index the content in our brains and make it searchable, please call me right now.

Thanks to Ellen Rudolph, Parthi Loganathan, Bruno Faviero, and my dad for feedback on this post.

This was originally posted on Medium on April 25th, 2019 and reposted here.
