
DePIN's Imperfect Present & Promising Future: A Deep Dive

An in-depth look at the emerging Decentralized Physical Infrastructure space.

This piece was written by Knower & Smac

At this stage, everyone in crypto is familiar with the concept of Decentralized Physical Infrastructure Networks (DePIN). They represent a potential paradigm shift in how critical infrastructure is built, maintained and monetized. At its core, DePIN leverages blockchains and decentralized networks to create, manage and scale physical infrastructure without relying on any centralized entities. This approach introduces a new era of openness, transparency, community-driven growth & ownership and aligns incentives across all participants.

While these ideals are obviously important, for these networks to scale to their true potential they need to build compelling products and solve meaningful problems. The real significance of DePIN lies in its potential to disrupt traditional models that are often plagued by high costs and inefficiencies. We’re all too familiar with slow-to-innovate centralized parties that are often defined by monopolistic or oligopolistic practices. DePIN can flip this on its head. The end result should be more resilient and adaptable infrastructure that can quickly respond to changing demands and technological advances.

Though we are loath to develop market maps as they are often too backward-looking for the stage of investing we do at Compound, in this case we’ve found a lot of existing research overcomplicates the state of this vertical. To us, at the highest level, DePIN breaks down into six distinct subsectors, each covered in its own section below.

One of the common criticisms of crypto is a cry for more ~real use-cases~. 

Candidly, this is a tired and uninformed argument but one that is likely to persist regardless. Especially in the West – where existing use-cases for crypto are largely taken for granted – we would all benefit from showcasing easier-to-grok examples for the potential of this technology. DePIN is unique in this regard as it’s one of the best examples of crypto incentives creating real-world utility, enabling individuals to build and participate in networks that weren’t previously possible. Most crypto categories are dependent exclusively on the success of software, whereas DePIN places a heightened emphasis on tangible hardware you can see and feel in the real world. Narratives and story-telling are an important part of every technology wave, and like it or not, as an industry we need to do a better job telling the DePIN story.

Despite a handful of sector leaders – Helium, Hivemapper, and Livepeer for example – there are still plenty of unanswered questions for DePIN as it matures. The core value prop is to upend the traditional models of infrastructure provision and management. By implementing crypto incentives, DePIN can enable:

  • Greater resource utilization
  • Better transparency
  • Democratized build-out and ownership of infra
  • Fewer single points of failure
  • More efficiency

All of this should, in theory, lead to more resilient real-world infrastructure.

We believe another core value proposition of DePIN is its ability to flip established business models on their heads through cryptoeconomic design. There will always be an opportunity to look at specific DePIN projects and cynically view them as simply weaker businesses with contrived token incentives. But this report will highlight that the introduction of blockchains can dramatically improve an existing model, and in some cases introduce a completely new one.

While we have views on it, we mostly set aside the open question of where to build these networks over the long term: Solana has become the Schelling point for DePIN, but tradeoffs exist regardless of the base layer.

As with every type of network, there are two main areas of focus: the demand side and the supply side. Suffice it to say, the demand side is (almost) always the more difficult of the two to prove out. The simplest explanation for this is that the token incentive model maps easiest to the supply side. If you’re already driving a car or have some underutilized GPUs, it’s very easy for you to add a dash-cam or offer out your idle compute. There’s little friction on this side.

But when it comes to demand, the product or platform needs to actually deliver real value to paying customers. Otherwise, demand never materializes or manifests itself as mercenary capital. Taking some practical examples, the demand side for Helium Mobile is individuals looking for a better cellular plan. In the context of Hivemapper, the supply side is individuals earning tokens for providing detailed mapping data. Importantly, these are both easy to understand.

You can’t discuss supply and demand without expanding on a core piece underpinning all of this DePIN activity: token incentives. In a report from this summer, 1kx outlined the cost structures for a variety of DePIN projects and examined the sustainability of these systems. The key takeaway is that aligning reward distributions with operational costs and demand growth is difficult, let alone building a generalized model that fits every DePIN project. Generalizing doesn’t work, especially within a vertical that touches such disparate corners of the real-world economy. The complexities of the markets these networks hope to reshape are what make them both exciting and challenging.

There are a few differences in the models, but for the most part, a DePIN project's cost structure can be analyzed by a) determining how much it costs a node operator to participate, b) measuring the efficiency of a network’s nodes, and c) examining differences between projects’ accounting mechanisms.
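To make this concrete, below is a minimal back-of-envelope sketch of a single node operator’s economics under that framing. Every input (hardware cost, power draw, token emissions, token price) is a hypothetical placeholder rather than a figure from the 1kx report or any specific project.

```python
# Hypothetical back-of-envelope model of a single DePIN node operator.
# Every input below is an illustrative placeholder, not a figure from the 1kx report.

def monthly_opex(power_draw_watts: float, usd_per_kwh: float, bandwidth_usd: float = 0.0) -> float:
    """Recurring cost to keep one node online for a month."""
    kwh_per_month = power_draw_watts / 1000 * 24 * 30
    return kwh_per_month * usd_per_kwh + bandwidth_usd

def months_to_breakeven(hardware_usd: float, tokens_per_month: float,
                        token_price_usd: float, opex_usd: float) -> float:
    """Months until cumulative rewards cover hardware plus operating costs."""
    net_monthly = tokens_per_month * token_price_usd - opex_usd
    return float("inf") if net_monthly <= 0 else hardware_usd / net_monthly

opex = monthly_opex(power_draw_watts=12, usd_per_kwh=0.15)          # ~$1.30/month for a small hotspot
print(f"breakeven: {months_to_breakeven(500, 300, 0.05, opex):.1f} months")
```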

The report is absolutely worth a read, but there’s a major takeaway from this: you can’t generalize tokenomics and make assumptions about one project based on another’s failure to properly sustain token incentives. Additionally, you can’t write off DePIN as a sector due to tokenomics alone. Looking at DeFi in particular, there has yet to be a single example of a sustainable tokenomics model that rewards new entrants while continuing to incentivize earlier depositors. The reduction in CapEx provided by the typical DePIN model not only makes it easier to spin up a network and run a business from day one, but also makes it competitive against traditional ways of building out these systems (which we’ll explore throughout the piece).

One point we think is overlooked a lot of the time is what network effects actually mean. We often talk about these as networks, and in some cases conflate this with the principle of network effects. In an ideal world, a DePIN network is bootstrapping one side of the network (likely supply) and then demand begins to ramp such that more supply comes online to meet it. That becomes an iterative cycle where the value of the network expands exponentially. The presence of a two-sided marketplace alone does not mean you have network effects.

Ok fair, but are there specific traits or properties of existing businesses that make DePIN an attractive model for disruption? We would argue that DePIN specifically excels when at least one of the following is true:

  • Scaling infrastructure for a single provider is costly or cumbersome
  • There’s opportunity to create greater efficiency matching supply & demand
  • You can accelerate a cheaper end-state as previously underutilized assets are brought closer to full capacity

To give you an idea of structure, we’ll divide this report into the six distinct categories mentioned earlier. Each section will cover the core idea or problem, how existing teams are tackling it, and our views on the durability of the sector and what open questions still exist. If you want to skip to our own internal ideas, at the end of the report we explore some novel ideas for DePIN companies we’d love to partner with and build together.

Telecom and Connectivity

The modern telecommunications industry developed predominantly in the 1990s, as wired technology was quickly being replaced by wireless. Cell phones, wireless computer networks and the wireless internet were just crossing over into broader retail adoption. Today that telecom industry is vast and complicated, managing everything from satellites to cable distribution to wireless carriers to sensitive communications infrastructure. 

Everyone is familiar with some of the largest companies in this space: AT&T, China Mobile, Comcast, Deutsche Telekom, and Verizon. From a scale perspective, the traditional wireless industry generates over $1.5 trillion of global annual revenue across three main sectors: mobile, fixed broadband, and WiFi. Let’s quickly differentiate these three:

  • The mobile industry forms and maintains person-to-person connections; anytime you make a phone call, you’re using mobile infrastructure.
  • The fixed broadband industry refers to high-speed internet services delivered to homes and businesses through a fixed connection (usually through cable, fiber, DSL or satellite) and offers more stable, faster and in some cases unlimited data connections.
  • The WiFi industry manages the most widely-used connectivity protocol, enabling everyone to access the internet and communicate with each other.

The EV3 guys shared a post ~2 years ago detailing the state of existing telecom companies.

“With $265B worth of productive physical assets (radios, base stations, towers), telcos generate $315B of annual service revenues. In other words, productive asset turnover is 1.2x. Not bad!” 

But these companies need hundreds of thousands of employees and billions of dollars worth of resources to continue managing this growing infrastructure. We’d encourage you to read the post in full as it goes on to explore how these corporations pay taxes, update infrastructure, manage licenses and ultimately lobby to keep operations running. It’s an unruly and unsustainable model.

More recently, Citrini Research highlighted some of the looming issues for incumbent telecom giants. They detail the financial situation many of these companies find themselves in and paint a concerning picture: overly optimistic projections (attributable to the pandemic) have left many businesses holding the bag on supply that’s not easily deployable. Specifically, a not-insignificant number of balance sheets are overweight fiber optic cable acquired during the fiber-to-the-home (FTTH) personal connectivity boom. The problem is there is nowhere to offload this supply. Citrini goes on to point out the shift in demand from personal connectivity to expanded access across “shared campus and metro area networks to deploy inventory and support a new tech boom”.

Telecom companies will be unable to roll out new supply quickly enough, and demand is increasing to a point where it’s more difficult than ever to keep pace – DePIN can fill this supply gap. This creates a unique opportunity for decentralized wireless projects to simultaneously scale network growth faster than incumbents while meeting the growing demands for wired and wireless infrastructure. Citrini specifically calls out the need for fiber optic networks and WiFi hotspots, and while decentralized deployment of fiber optic nodes has yet to be explored in depth, the implementation of WiFi hotspots is feasible today and already underway.

We noted the three pillars of wireless earlier; now let’s run through each of these in more depth and compare their decentralized models to the existing telecom industry today.

Mobile Wireless

Mobile wireless is perhaps the most well-known subsector of Decentralized Wireless (DeWi) and in this context refers to decentralized networks aiming to provide cellular connectivity (i.e. 4G & 5G) through a distributed network of nodes. Today, the traditional mobile wireless industry relies on a network of cell towers that provide coverage to specific geographic areas. Each tower communicates with mobile devices using radio frequencies and connects to the broader network through backhaul infrastructure (i.e. fiber). The core network handles all the switching, routing and data services – a centralized provider connects cell towers to the internet and other networks.

We know the infrastructure buildout is capital-intensive and that deploying 5G requires denser networks with more towers, especially in urban areas. Rural and remote areas typically lack coverage because the ROI is lower given the comparatively smaller potential subscriber count, which makes extending coverage to these areas a challenge.

While the actual buildout of infrastructure is expensive, this also introduces a need for ongoing maintenance. Beyond repairs, software updates and upgrades to support new technology (5G for example), in high-density areas the problem of network congestion can degrade service quality, requiring even more ongoing investment in network optimization and capacity upgrades. Incidentally, this was something that these operators should have seen coming long ago.

The prospect of a data-driven market with uncapped growth potential saw spectrum costs increase from $25-30 billion for 2G to $100 billion for 3G. Capital costs rose to enhance the core infrastructure and expand mobile networks to cover more than half the world’s population with 3G back in 2010. Though 3G led to a rapid increase in subscribers, the rising costs began outpacing revenue growth. While Blackberry was the defining device of the 3G era, the iPhone was clearly the device that both shaped and ruled 4G. 

These two pieces of technology (4G and the iPhone) brought the internet to handheld devices and data usage went completely vertical from an average of less than 50 megabytes (MB) per month per handset in 2010 to 4 gigabytes (GB) by the end of the 4G era. The problem is that while revenues from data grew, it wasn’t nearly enough to offset the steep drop in revenues from traditional voice service and text (~35% annually from 2010-2015 alone). On top of that, operators had to spend greater than $1.6 trillion on spectrum, core network upgrades and the expansion of infrastructure to meet never-ending demand for network capacity & coverage.

The decentralized wireless vertical is most commonly associated with Helium. Rightfully so in our view. Helium was founded over a decade ago with an original vision to create a decentralized wireless network for IoT. The goal was to build a global network that allowed low-power devices to connect to the internet wirelessly, enabling a wide range of applications. While the network grew, the IoT market itself revealed limitations in scale and economic potential. In response Helium expanded into mobile telecom, leveraging its existing infrastructure while targeting the more lucrative and data-intensive market. Eventually Helium announced the launch of Helium Mobile, a new initiative aimed at building out that decentralized mobile network – the move still clearly aligns strategically with Helium’s broader vision of creating a decentralized wireless ecosystem. Today, there are over 1 million hotspots within Helium’s network of coverage and over 108k subscribers to their mobile network.

Helium’s IoT platform is dedicated to connecting low-power devices to power niche networks like smart cities or environmental monitoring. The platform is built on LoRaWAN (Long Range Wide Area Network), a low-power wide-area networking protocol designed to connect battery-operated devices to the internet across regional, national or global networks. The LoRaWAN standard supports bi-directional communication, allowing devices to both send and receive data.

The IoT network is composed of devices like vehicles and appliances that are connected within a network through sensors and software, letting them communicate, manage, and store data. Helium’s IoT platform was formed around the idea that applications in the world would need extensive coverage with low data rates. And while individuals can spin up hotspots anywhere in the world at relatively low cost, most appliances only need to send data sparingly, which makes the cost of running traditional infrastructure capital inefficient. 

By combining LoRaWAN with the decentralized network built by Helium, it was now possible to lower capital expenditures and expand the reach of the network. There are over sixteen different hotspot types approved by Helium for operation, each reasonably affordable.

The Helium Mobile initiative was created as an alternative to traditional wireless providers. Initially, the team created a decentralized mobile network that can coexist with traditional cell networks, introducing a mobile virtual network operator (MVNO) model whereby Helium partners with existing carriers to provide seamless coverage while leveraging its own network for additional capacity and lower costs. The hybrid approach lets Helium offer competitive mobile services while expanding its decentralized network in tandem. Helium Mobile is compatible with existing 5G infrastructure to bring a low-cost platform for smartphones, tablets, and other mobile devices that require high-speed connectivity.

Verizon and AT&T each have over 110 million subscribers and have average monthly plans of $60-90 for individuals and $100-160 for families. Helium Mobile by contrast offers a $20/mo unlimited plan. How is this possible?

It comes back to one of the core advantages of DePIN — drastically reducing capital expenditures. A traditional telecom company needs to build out all of the infrastructure themselves and service ongoing maintenance. In turn, these operators pass on some of those costs to customers in the form of higher monthly charges. By introducing token incentives, teams like Helium can solve the bootstrapping problem while offloading capex to its network of hotspot managers. 

One of the points we noted in the introduction of this report was that ultimately these DePIN teams would need to provide a product and service that has real demand. In the case of Helium, we are seeing them make meaningful inroads with the largest mobile carriers in the world. A relatively recent collaboration with Telefonica extends Telefonica’s coverage and allows offloading of mobile data to the Helium Network. 

Specifically as it relates to offload, some back-of-the-envelope math suggests there’s meaningful revenue to be had from mobile offload alone. Assuming mobile users consume ~17 gigabytes of data per month, and Helium’s carrier offload services take in ~5% of data usage from large carriers, that’s upwards of $50 million in revenue alone. There are obviously a lot of assumptions here, but we’re in the earliest stages of these agreements, and should customer conversion rates or offload take-rates come in higher, these revenues could be materially larger.
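For transparency, here is roughly how that back-of-envelope math can be reconstructed. Only the ~17 GB/month usage and ~5% offload share come from the paragraph above; the subscriber count and the per-gigabyte offload rate are our own illustrative assumptions.

```python
# Illustrative reconstruction of the offload revenue estimate.
# Only the ~17 GB/month usage and ~5% offload share come from the text above;
# the subscriber count and $/GB offload rate are hypothetical assumptions.

GB_PER_USER_PER_MONTH = 17       # average mobile data usage (from the text)
OFFLOAD_SHARE = 0.05             # ~5% of carrier traffic offloaded to Helium (from the text)
USERS_IN_COVERAGE = 10_000_000   # assumption: carrier subscribers in Helium-covered areas
USD_PER_GB_OFFLOADED = 0.50      # assumption: what a carrier pays per offloaded GB

monthly_gb_offloaded = USERS_IN_COVERAGE * GB_PER_USER_PER_MONTH * OFFLOAD_SHARE
annual_revenue = monthly_gb_offloaded * USD_PER_GB_OFFLOADED * 12
print(f"~${annual_revenue / 1e6:.0f}M per year")   # ~$51M under these assumptions
```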

Helium is a great example of the power that comes from decentralized networks at scale. Ten years ago something like this wouldn’t have been possible. Bitcoin was still very niche and it was difficult to get people to set up a Bitcoin mining rig at home, let alone a hotspot for a very nascent sector within crypto. Helium’s current and future success is a net-positive for the entire space, assuming they can manage to gain market traction relative to their traditional counterparts. 

From here, Helium will focus on expanding its coverage, and it has the unique ability to target growth in the areas where its users most frequently drop back to T-Mobile coverage. This is one of the other nuanced benefits of token models – as Helium collects data on where coverage most commonly drops, it can direct incentives to deployers nearest those areas to quickly fill high-density but uncovered regions.
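As a toy illustration of that mechanism (and explicitly not Helium’s actual reward logic), a network could weight an epoch’s incentives by how often subscribers in each map cell fell back to the host carrier:

```python
# Toy sketch of coverage-gap-weighted incentives; not Helium's actual reward algorithm.
# Each region records how often subscribers fell back to the host carrier last epoch.

fallback_events = {"hex_a": 1200, "hex_b": 75, "hex_c": 430}   # hypothetical telemetry
base_reward_pool = 10_000                                       # tokens per epoch (assumed)

total = sum(fallback_events.values())
boosted_rewards = {h: base_reward_pool * n / total for h, n in fallback_events.items()}
# Deployers in hex_a (the biggest coverage gap) receive the largest share of new emissions.
print(boosted_rewards)
```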

There are others in the DePIN mobile space working on their own innovative solutions: Karrier One and Really are two examples.

Karrier One calls itself the world’s first carrier-grade, decentralized 5G network, combining traditional telecom infra with blockchain technology. Karrier’s approach is very similar to Helium, where individuals within the network can set up nodes — cellular radios similar to Blinq Network’s PC-400 and PC-400i (run on top of Sui). 

Karrier’s hardware is quite similar to Helium’s, but their software and GTM diverge. Where Helium wants to cover as much of the world as possible, Karrier is initially focused on underserved or remote areas. Their software can leverage the phone numbers tied to mobile devices’ SIMs to send and receive payments, bypassing banks entirely.

In their own words, Karrier enables the creation of “one virtual mobile number for all your web3 notifications, payments, logins, permissions, and more,” calling it KarrierKNS. This makes Karrier potentially better-suited for communities or locations without robust banking infrastructure, while Helium might be a better fit for more developed locations with individuals looking to reduce the cost of their cellular plan.

Karrier’s network architecture consists of Foundational, Gatekeeper, and Operational nodes. Foundational nodes manage authentication and blockchain maintenance, Gatekeeper nodes handle wireless access to end users, and Operational nodes provide traditional telecommunications modules. All of this runs on top of Sui smart contracts, with the Karrier One DAO (KONE DAO) managing internal review processes. Their usage of blockchain tech revolves around the following principles:

  • Smart contracts are superior to authoritative and bureaucratic processes
  • Blockchains let users gain power and privacy over their data while still maintaining transparency
  • Tokenomics of the Karrier One network pave the way for shared success between network participants

Karrier is obviously in its infancy relative to Helium, but there’s room for more than one DeWi protocol to succeed. How many exactly remains to be seen, though we frequently see MVNOs change hands in the traditional telecom market at low-10-figure valuations. There’s also clearly some level of oligopoly in telecom with Verizon and AT&T. Where Karrier might succeed is in banking the unbanked with its KarrierKNS initiative, while Helium chips away at traditional telecom market share. One relevant note here is Karrier’s emphasis on future-proofing its network to accommodate 5G, edge computing and further technology advances. The technical capabilities of what this means in practice are beyond the scope of this report, but suffice it to say that’s interesting positioning at a time when traditional telecom is skittish about investing too heavily in 6G.

Really is the world’s first fully-encrypted and private wireless carrier, where individuals participate in the network through running small cell radios across the globe. Really’s main focus is ensuring all data transmitted through its network remains private — customers with traditional cell plans do not own their data, making Really an early example of a user-first telecom project.

Really cites some rather shocking data suggesting 1 in 4 Americans have been affected by cybercrime. The scale of consumer losses is staggering and only likely to compound over time. As more of our physical devices integrate software (smart fridges, smart cars, smart locks, in-home robots), the surface area for attack grows.

On top of that, anonymity is rarely preserved on the modern internet. It’s increasingly difficult to sign up for a new account anywhere without revealing personal information. Most companies are notorious for absorbing huge swaths of user data, and that ingestion will only increase thanks to AI.

Traditional telecom requires large cell towers that naturally have geographic gaps in coverage. Really makes up for this by building out the infrastructure for these smaller “cell towers” hosted in the homes of its network participants.

Their mobile plan offers customers full ID protection and monitoring, a security suite developed in-house (anti-worm, anti-ransomware), along with SIM swap protection and insurance services. Really offers bespoke mobile plans aimed at protecting its customers, first and foremost. Its current limitless plan costs $129/month and includes unlimited data, talk, and text, unlimited calls to over 175 countries, full encryption, and VIP services through its Really VIP program. This borrows a business model similar to concierge medicine as more consumers look for bespoke services they can’t get from traditional providers.

This is an admittedly unique approach to the DePIN mobile sector, as it differentiates on features and privacy as opposed to cost. It’s obviously still too early to extrapolate much, but as a general line of thinking — differentiating on a set of underlying beliefs that aren’t widely accepted (in this case the importance of privacy & encryption) is one approach to carving out space in a competitive market.

It’s straightforward to make a case for mobile wireless being the most promising sector in DePIN. Smartphones are ubiquitous. The industry also benefits from low switching costs thanks to eSIMs: users no longer need a physical SIM card and can store multiple carrier profiles on a single eSIM, which is especially convenient for frequent travelers. This trend should only continue, and as it does, the case strengthens for mobile DePIN to capitalize on – and potentially enable new unlocks from – eSIM advancement.

While smartphones and other mobile devices are powerful sensing machines, they aren’t without a unique set of challenges:

  • Always-on data collection comes with privacy concerns and device power-draw concerns
  • Opt-in data collection typically has poor retention rates
  • Data mining attracts users that skew data sets in unexpected ways
  • Airdrop farmers make it hard to differentiate organic vs inorganic traction
  • Defensibility is vulnerable to a race-to-the-bottom on token rewards

These aren’t unsolvable problems. For one, the possibility of “opt-out” functionality or total data deletion could be attractive for users who might otherwise be skeptical of having their data captured. Even just the ability to opt out is a powerful trust signal. We’re also seeing large-scale crackdowns on inorganic airdrop farming, with LayerZero being the most obvious and public manifestation of this.

It’s important to note that mobile wireless commands the highest per-GB revenue of any sector within the telecommunications industry. Mobile devices need constant connectivity, which translates to strong pricing power for mobile providers relative to fixed or WiFi connectivity providers. Additionally, consistent improvements to connectivity standards (i.e. 5G) allow for higher speeds and therefore higher performance, which only adds to the pricing power advantage.

What comes after 5G though? That’s right…6G

6G promises ~instantaneous~ communication between existing smart devices and the proliferation of new wearable devices entering the market. It may be silly to speculate on today, but it’s essentially just a more performant, hypothetical version of 5G. Funnily enough, one of the outstanding questions as it relates to 6G is the social coordination and consensus required from existing stakeholders. It’s unclear today whether there’s any consensus on the goals, characteristics and requirements of 6G among the telecom vendors, governments, MNOs, semiconductor manufacturers and device makers. The chart below – admittedly it’s intense – shows just one main area of commonality: widespread skepticism about how 6G will drive value on top of existing infrastructure.

Understanding where these stakeholders diverge, and which groups are most likely to “win” when there is contention, will be critical to understanding how this industry develops. It’s part of why we find DePIN particularly appealing, as we see non-crypto builders with long careers in other technology industries bringing that expertise to this space to solve problems. Admittedly it takes much more work and research to dig through these dynamics and develop internal views, but the prize for winning in markets this large is extraordinary.

Fixed Wireless

As the name suggests, fixed wireless refers to high-speed internet services delivered through a fixed connection. This is typically through cable, fiber, DSL or satellite technologies and compared to mobile, fixed broadband offers more stable and faster data connections. We can break this down further into the following:

  • Fiber optics
    • Transmits data using light signals through glass or plastic fibers
    • Can deliver speeds up to 1Gbps or more with extremely low latency
    • Ideal for activities like online gaming, video conferencing and cloud-based apps
    • Gold standard for fixed broadband and the most future-proof
    • High cost of laying fiber infrastructure, especially in less densely populated areas
  • Cables
    • Uses coaxial cables (originally laid for television) to deliver internet
    • Modern systems usually combine coaxial with fiber backbones for a hybrid fiber-coaxial (HFC) system
    • Speeds up to 1Gbps though actual performance varies especially during peak usage times
    • Low latency but not as low as fiber with performance suffering during high-traffic periods
    • Cable technology is evolving with the development of DOCSIS 4.0, which can theoretically provide speeds up to 10Gbps bringing cable closer to fiber-like performance
  • DSL
    • Delivers internet over traditional copper telephone lines
    • One of the oldest forms of broadband still in use
    • Speeds typically range from 5-100 Mbps with high latency
    • Increasingly outdated
  • Satellites
    • Delivers internet via satellites orbiting the Earth
    • Traditionally used geostationary satellites at high altitudes but newer systems like low-Earth orbit (LEO) satellites are transforming this space
    • LEO speeds are 50-250 Mbps with latency around 20-40ms
    • While geostationary systems suffer from high latency given the long distance data travels, LEO is significantly reducing this
    • Global coverage
    • Growth prospects likely in areas where terrestrial infrastructure is lacking or difficult to reach
    • More likely to play a complementary role in urban and suburban areas where fiber and cable are available

Fixed broadband has the advantage of stickier customers and higher retention rates today, given the challenge of switching providers (i.e. installation costs & downtime) and the inherent bundling that happens (cable broadband).

On the surface this seems promising but there are a few questionable assumptions baked in here. The most feasible and highest impact threat is wireless 5G and future 6G tech – with multi-gigabit speeds and low latency, these can offer comparable performance to fiber broadband but with an added advantage of mobility and ease of deployment. There’s a very conceivable world in which we see cord-cutting from fixed services and a rapid erosion of the customer base for broadband particularly in urban and suburban areas. 

The other elephant in the room (though likely a longer-term impact) is LEO satellite broadband systems. We all know about Starlink and Project Kuiper but in practice these are much more expensive today and aren’t close to performance parity. But the data caps will be removed eventually and costs will compress, making these a worthwhile challenger to fixed broadband dominance over a long enough time horizon.

Ignoring these two acute threats could leave those building in this space susceptible to the Thanksgiving turkey chart for their user base (specifically fiber, cable and DSL broadband).

Fixed internet comes at a cost, and revenue per GB is 10x lower than that of mobile wireless - so how do these companies make their money? The core business model of fixed internet comes through monthly subscription packages, tiered services, and the targeting of different enterprises. These companies are able to charge for things like custom solutions for businesses (think of hedge funds fighting for better positioning to underground cables), more performant internet speed at a higher cost, and bundling of services (cable, internet, and phone). It should come as no surprise to hear that the major providers here in the US are Comcast (Xfinity), Charter (Spectrum), AT&T, Verizon and CenturyLink. Internationally it’s companies like Vodafone, BT Group and Deutsche Telekom. While Comcast and Charter control a majority of the US fixed broadband market today, we’re seeing fiber leaders like AT&T and Verizon rapidly eat into market share through aggressive FTTH rollouts.

Fixed internet uses a technology called Point-to-Multipoint (PtMP) where a single central node (base station) connects to multiple end nodes (subscribers) over a shared communication medium. In fixed wireless this means a central antenna transmitting signals to multiple receivers. One of the primary challenges of early PtMP was interference from other devices operating in the same frequency bands, which led to signal degradation and poor performance. The other obstacle is limited bandwidth given the shared nature of PtMP architecture, leading to reduced speeds as more users are connected to the network.

Today, PtMP systems have become more efficient owing to the adoption of more advanced modulation and antenna techniques (OFDM, MIMO) and the use of higher frequency bands (mmWave), which allow them to achieve greater bandwidth and therefore support more users. It’s worth noting that these mmWave signals are still highly susceptible to something called attenuation – which effectively means environmental factors like rain, foliage and physical obstructions can significantly degrade performance. The fixed internet sector is still evolving and changing, making room for DePIN upstarts to come in with their own unique solutions.

Andrena and Althea are two of the more widely recognized companies building in the fixed wireless sector of DePIN. Andrena is a company offering high-speed internet services to multi-dwelling units (MDUs) at competitive prices, with a unique GTM built around minimizing installation requirements for customers. As mentioned above, the fixed internet customer base is stickier due to switching costs and difficulties that come from migrating actual physical infrastructure.

The technology Andrena is using to power its network is known as Fixed Wireless Access (or FWA) which is a method of delivering broadband internet services using wireless technology. Instead of cables, FWA uses radio signals transmitted from a base station to a receiver. This is far less expensive to deploy and can expand faster than wired fixed broadband, with the tradeoff that FWA is more susceptible to interference, environmental impact and performance degradation.

Andrena deploys rooftop antennas that cover a wide-ranging area — apartment buildings, office buildings, and other high-density locations. Through this deployment, Andrena can charge $25/month for 100 Mbps with its basic plan or $40/month for 200 Mbps — fairly competitive pricing that’s ~30% cheaper on average than Verizon or AT&T. The company is establishing partnerships with real estate companies and property owners, introducing a revenue-share model to get its antennas deployed to the public faster. You can see how this might make some sense fundamentally: property owners offer residents high-speed internet as an amenity while earning a portion of the revenue produced thanks to Andrena. Primary initial markets for Andrena include New York, Florida, New Jersey, Pennsylvania, and Connecticut.

More recently, Andrena announced Dawn, a decentralized protocol. Prior to Dawn, the company functioned like a traditional business with a public-facing narrative of decentralized principles. Dawn is Andrena’s attempt to connect users with a Chrome extension onboarding process that would, in theory, connect buyers and sellers of bandwidth, enabling individuals to become their own internet provider. The obvious open question here will be how value accrues – does it accrue to the Labs or equivalent entity, or does it actually flow back to a token?

Dylan Bane of Messari recently wrote about Dawn in a very comprehensive deep dive, outlining some of the main issues of the broadband sector and some of the major trade-offs. Admittedly, they read pretty similarly to the blockchain trilemma’s balance between performance, security, and scalability – except with broadband, the trilemma hinges on the methods used to deliver services. Coaxial cable, fiber-optic cable, DSL, and satellites are all cited as existing within one of these quadrants with a combo of “best performance, but worst scalability” or “worst security, but best performance” and so on.

The one important thing to note is that this depicts a static point in time – today – while we are primarily focused on peeking around the corner into the future. It’s not enough to understand where these technologies reside currently; what matters is how realistic the paths are for each of them to move toward the top-right corner of this matrix. That will dictate startup formation, strategy and ultimately success.

The other point we’d push back on here is the idea that fiber-optic cable belongs at the bottom of scalability – yes it’s true that it’s expensive to lay fiber, but this also ignores the work already done here. The most densely populated areas have already been outfitted with this fiber and so, while it’s unlikely fiber can scale to net new areas, it has a tight grasp on the urban and suburban landscape.

Dawn’s system relies on a new primitive known as medallions, which are basically a unit of account that can be staked for a 12% share of a high-value region’s on-chain revenue. Medallions function as gateways into the Dawn network – assuming decentralized broadband takes off and one day flips incumbents, how much would a 12% share of revenue pay out to medallion holders? The initial rollout is said to target over three million households and gives Dawn an ARR of over $1 million, thanks to its relationship with Andrena’s existing customer base.
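To put rough numbers on that question, here is a simple payout calculation. The 12% share and the ~$1 million starting ARR come from the text above; the household counts and monthly revenue per home are hypothetical scenarios.

```python
# Rough medallion payout math under hypothetical adoption scenarios.
# The 12% revenue share and ~$1M starting ARR come from the text;
# household counts and monthly ARPU below are illustrative assumptions.

MEDALLION_SHARE = 0.12

def annual_payout(households: int, monthly_arpu_usd: float) -> float:
    """Total annual on-chain revenue flowing to a region's medallion stakers."""
    return households * monthly_arpu_usd * 12 * MEDALLION_SHARE

print(f"on the stated ~$1M ARR : ${1_000_000 * MEDALLION_SHARE:,.0f}")
print(f"100k homes @ $30/mo    : ${annual_payout(100_000, 30):,.0f}")
print(f"3M homes @ $30/mo      : ${annual_payout(3_000_000, 30):,.0f}")
```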

Althea

Althea is a bit different and positions itself as a payments layer powering infrastructure and connectivity; it’s an L1 that offers dedicated blockspace for subsequent L2s, machine-to-machine micropayment services, contract-secure revenue, and a hardware-agnostic approach to development. Ok that’s a mouthful…what does this actually mean and how is it at all related to fixed wireless?

Althea is effectively a protocol that allows anyone to install equipment, participate in the decentralized ISP and earn tokens for doing so. The idea is that users sell bandwidth to each other and cut out the middleman of traditional fixed internet businesses. Althea attempts to do this through specialized intermediary nodes that connect the network to the broader internet, either via internet exchanges or via business-grade connections from traditional ISPs.

Ok well what are internet exchange points (IXPs)?

They’re physical infrastructure that lets multiple ISPs connect and exchange traffic directly. Instead of routing through a third-party network, participants in an IXP can exchange traffic directly which is more cost-efficient. The core principle of an IXP is to enable “peering” agreements between networks:

  • Public peering – happens over the shared infrastructure of the IXP; multiple networks connect to a common switch allowing for broad traffic exchange among a bunch of participants
  • Private peering – happens through a direct, dedicated link between two networks within the same data center or IXP facility; typically used when lots of traffic is exchanged between two specific networks

IXPs often provide lower latency and higher speeds compared to traditional ISP routing because of the direct connections between networks. Reducing the number of intermediary hops means data packets reach their destination faster. The reason most retail consumers get their internet through an ISP rather than directly from an IXP mostly comes down to last mile connectivity, consumer equipment and structural design.

One of the benefits of DePIN is its ability to reduce costs at the start of operations while still extending this principle throughout the life cycle of a project. With Althea, nodes are able to dynamically adjust costs based on verifiable supply and demand, making it a source of truth for the market on fixed internet services.
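As a hedged sketch of what dynamically adjusting price to supply and demand could look like in practice (this is not Althea’s actual pricing mechanism), consider a simple rule where the per-GB rate drifts up as utilization approaches capacity:

```python
# Toy dynamic bandwidth pricing; a sketch, not Althea's actual mechanism.
# Price per GB drifts up when demanded bandwidth approaches available supply.

def next_price(current_price: float, demanded_mbps: float, available_mbps: float,
               sensitivity: float = 0.1, floor: float = 0.01) -> float:
    utilization = demanded_mbps / available_mbps
    # Above ~80% utilization the price rises; below it, the price decays toward the floor.
    adjustment = 1 + sensitivity * (utilization - 0.8)
    return max(floor, current_price * adjustment)

price = 0.05  # USD per GB, assumed starting point
for demand in (400, 700, 950, 600):            # Mbps demanded across successive intervals
    price = next_price(price, demand, available_mbps=1000)
    print(f"demand {demand} Mbps -> ${price:.4f}/GB")
```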

The fixed wireless space seems to have the largest range of outcomes depending on the development of mobile wireless. On the one hand, there’s a world in which 5G and 6G don’t reach performance parity with fixed broadband over the mid-term and fixed maintains its position. On the other hand, there’s a world in which 5G, 6G and LEO satellite broadband displace it entirely, much quicker than anyone anticipates. Betting against the rapid advancement of technology is generally a losing bet, but there are still enough open questions surrounding mobile wireless development that you can credibly make an argument for alternative fixed wireless solutions.

WiFi

The last of the DeWi subsectors is WiFi, which although ubiquitous, doesn’t generate a substantial share of the wireless services revenue market. That’s mostly because it’s difficult to monetize. Consumers have come to expect free WiFi everywhere and so telecom companies like AT&T, Comcast and Spectrum operate large public WiFi networks as a way to offload mobile data traffic or enhance customer loyalty.

On the enterprise side, these contracts are usually long-term ones with WiFi vendors, which aid retention numbers. Candidly, this feels like the least attractive subsector to build in: there’s already abundant, inexpensive supply everywhere. In the more difficult-to-reach geographies, the willingness and ability to spend is immaterial, which makes it difficult to vampire-attack traditional providers here.

Obviously WiFi is the most widely-used connectivity protocol globally and can connect the largest variety of devices. Almost everything in our homes can connect to WiFi today, and we’ll have even more smart devices in the future. In some respects, it’s the underlying language of the internet that powers all of our interactions. But that doesn’t necessarily make it the most interesting future space for disruption and value capture.

Incidentally this might explain the more limited number of operators building here. The most notable WiFi DePIN teams include Dabba and WiCrypt.

Dabba

Dabba is building out a decentralized connectivity network with a focus on India. Their strategy thus far has prioritized the deployment of lower-cost hotspots, similar to Helium, but with a focus on WiFi – over 14,000 hotspots have been released and over 384 terabytes of data have been consumed on the network. 

Their thesis centers around India’s economic growth and accommodating its massive population. The country needs more performant infrastructure capable of competing with centralized providers on uptime and accessibility. Instead of simply deploying as many hotspots as they could, the team initially set out to send these devices into areas of higher data demand, resulting in greater word-of-mouth marketing to bootstrap the project from the beginning.

In a blog post from late last year, the team shared an overview of India’s telecom industry: despite a population over 1.4 billion, only 33 million were broadband service subscribers and just 10 million were 5G network subscribers.

Today, Dabba’s priorities lie in expanding their network and reducing the friction to join. Dabba believes that while 4G and 5G connections are necessary, businesses and households in India require fiber to facilitate connectivity between these mobile and fixed broadband networks. Over time, the goal is to build the largest and most decentralized network that can power India and deliver this at lower costs.

The network architecture is pretty straightforward: stakeholders in the Dabba network are hotspot owners, data consumers, and the local cable operators (LCOs) that manage the relationships between suppliers and consumers. Their strategy has shifted to targeting some of the more rural areas of India through their LCO network, after initially deploying in densely populated cities with higher initial demands for WiFi. 

WiCrypt

Another project building out decentralized WiFi infrastructure is WiCrypt – they hope to enable anyone to become an internet service provider through dynamic cost structures and their global network of hotspots. Once again, this looks a bit like what Dabba is building, and their primary focus is on enabling individuals or businesses to share WiFi bandwidth with others for a fee. The WiCrypt explorer lists a total of over $290,000 in rewards distributed so far and a respectable network coverage across Europe, Asia, Africa, and the United States.

Their network size is admittedly much smaller than anything we’ve observed so far. In the WiCrypt whitepaper, a few major problem areas are specifically noted:

  • Countries censoring internet content
  • An ISP oligopoly in which nine corporations make up over 95% of revenues
  • Abusive policy and overreach by existing ISPs

These are valuable talking points and highlight one of the main issues that led to decentralized wireless networks – consumers are stuck with an existing set of sub-par options and often unable to switch without significant pain. WiCrypt wants to enable anyone to become an ISP sub-retailer, letting them dictate more favorable terms outside of the incumbents’ infrastructure and business practices. The business model here seems tough given the ubiquity of public WiFi today. It’s also difficult to imagine the demand side (i.e. those who need internet access) is large enough from a capital perspective to make it worth the supply side’s effort.

WiCrypt’s architecture includes hardware (hotspots), a mobile app (for ease of use), routers, cloud servers, and firmware. Routers batch incoming transactions and post them to the blockchain, and interact with Linux-based firmware to manage “traffic control” and router activity. From here, users interact with the WiCrypt mobile app to handle tasks like WNT token payments, authentication, data management, or other activities relevant to the marketplace. The cloud servers connect with WiCrypt’s smart contracts and confirm data attestations, serving as a second method of confirming information posted on-chain.
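Below is a minimal sketch of the batching pattern described above. The record fields and the flush behavior are hypothetical illustrations, not WiCrypt’s actual firmware or contract interface.

```python
# Minimal sketch of the usage-batching pattern described above.
# Field names and flush behavior are hypothetical, not WiCrypt's actual interface.

from dataclasses import dataclass
from typing import List

@dataclass
class UsageRecord:
    session_id: str
    megabytes_served: float
    price_wnt: float          # amount owed in WNT for this session

class RouterBatcher:
    def __init__(self, batch_size: int = 100):
        self.batch_size = batch_size
        self.pending: List[UsageRecord] = []

    def record(self, rec: UsageRecord) -> None:
        self.pending.append(rec)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # In practice this step would sign the batch and submit it on-chain,
        # with cloud servers later attesting to the same data.
        print(f"posting batch of {len(self.pending)} records on-chain")
        self.pending.clear()

batcher = RouterBatcher(batch_size=2)
batcher.record(UsageRecord("s1", 512.0, 1.2))
batcher.record(UsageRecord("s2", 128.0, 0.3))   # hits batch_size, triggers a flush
```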

Decentralized wireless deservedly draws a lot of attention given it stands to benefit immensely if successful. The telecom industry powers the world and how we communicate for work, in our personal lives, and in our free time. Decentralizing this tech stack is a daunting task, but one worth pursuing given the uncertainty of what traditional telecom looks like going forward. Add to that the size of the prize for teams who can successfully execute in this vertical and it’s clear we’ll continue to see stiff competition here.

The market’s appreciation for Helium’s success in recent months is a testament to builders ignoring how the market perceives them and continuing to drive value to the network. Over time this value creation cannot be ignored. It also shows a willingness from incumbents (like T-Mobile and Telefonica) to embrace this paradigm shift. 

If you think the telecom industry has inefficient, outdated and poorly managed infrastructure, just wait for this next section: decentralizing the energy grid.

Energy

We’ve written extensively about distributed energy. First, back in early 2023 with our initial release of A Crypto Future. Then again more specifically as part of our public theses database. While there were very few people thinking about this space at all back then, it seems distributed energy is becoming much more in vogue now.

Energy remains one of the most highly regulated monopolies in the US & one particularly well-suited for DePIN. The energy market is extremely complicated across a number of different vectors: hardware, capture, transmission, financing, storage, distribution, pricing.

Renewable energy certainly has important tailwinds, particularly on the regulatory front. But solar panels are expensive, not well-suited to all locations, and financing the upfront investment has turned out to be a poor business. Batteries are becoming more efficient and less costly, and the idealistic future for many in the renewable space is solar + battery. Nuclear energy is having a difficult enough time getting buy-in without any crypto component.

One of the biggest issues today with renewables is that many of the sources are intermittent and dependent on weather conditions (i.e. wind does not blow consistently, the sun does not shine 24/7). Integrating these energy sources into existing grids is also challenging – the grid has to manage fluctuating power levels which can strain infrastructure. Traditional power grids were designed for one-way energy flow – from centralized power plants to consumers. 

Distributed energy resources (DERs) require the grid to handle bidirectional energy flows which causes instability, especially when DER penetration is high. These DERs can range from wind turbines and rooftop solar panels to smart thermostats and bespoke fuel cells. 

The elephant in the room is that centralized energy providers in many ways have a natural monopoly – they often own the transmission and distribution infrastructure, giving them significant control and influence over energy markets. Despite these challenges, we’re seeing the advancement of smart grid technology, large-scale battery storage systems and an opening up on the regulatory front to allow DERs to participate more fully in energy markets. We continue to believe distributed energy is fertile ground for crypto and DePIN.

Here’s a shortlist of some of the more developed energy-related protocols and what they’re building today:

  • Plural Energy: SEC-compliant on-chain financing for clean energy investing; democratizing financial access to high-yielding energy assets (think Ondo for energy)
  • Daylight: using distributed energy resources to let developers reprogram the electric grid
  • Power Ledger: developing software solutions for tracking, trading and tracing of energy, built to address the problem of intermittency in renewable energy integration into our power grid
  • Srcful: accelerating the growth of renewable energy by creating distributed networks for energy production via individuals (HIP 128 will allow dual-mining through the Helium network)
  • StarPower: decentralized energy network that wants to utilize IoT connectivity to enable virtual power plants
  • PowerPod: providing reliable and accessible charging networks globally through a shared ownership model similar to Bitcoin’s
  • DeCharge: electric vehicle infrastructure with the OCPP global standard for EVs (GTM in India)

One of the most discussed topics of energy DePIN is the idea that it might be possible to decentralize the electric grid. Proponents of renewable energy believe that while we need more alternative energy sources, their integration into the grid will put even more strain on already outdated & degrading infrastructure. Technologists will tell you we need more real-time data around energy consumption and storage, better metering infra, and more automation. ESG advocates will tell you that we need a better electric grid that extends coverage to rural and underserved populations. If you’re talking to an AI/ML researcher, they’ll tell you we need $7 trillion to rebuild the electric grid entirely.

Increasing amounts of compute for AI and reshoring means our electricity demands are only set to ramp further. The Decentralized Energy Project (DEP) wrote that “the guarantee of reliability in the legacy power system rests on a foundation of centralized control and top-down engineering.” Not great! The system in place today is still largely an amalgamation of bits and pieces of the 21st century injected into a 20th century grid, built to serve the demands of a resource-intensive society. The DEP raised some fair questions regarding how we might transform the grid:

  • What control architecture would best allow an evolution from a centralized grid to a decentralized grid?
  • What would enable a patchwork of upgrades, while maintaining system reliability and security?

Replacing older parts with newer ones can result in downtime and integrating renewable energy sources is difficult at scale. Meanwhile, the process of generating and transmitting energy is quite complex. Borrowing from some recent research by Ryan McEntush, we can map out the electric grid as such:

  • The US grid consists of three major interconnections: East, West, and Texas. These are managed by 17 NERC coordinators, with independent system operators and regional transmission operators overseeing economics and infrastructure. Actual power generation and delivery are handled by local utilities or cooperatives.
  • Grid operators use the interconnection queue to manage new connections, evaluate if the grid can support the added power in a specified area, and determine the cost of grid infra upgrades.
  • If you’re an individual homeowner running a solar panel, the process of distributing energy back to the grid relies on fairly complicated tech. This includes a metering system, an inverter to convert from DC to AC, feed-in tariff (FiT) systems (in place of metering infra), synchronization systems, and payment infrastructure to ensure a homeowner is properly credited (a simplified crediting example follows below).
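To make the crediting step concrete, here is a simplified net-metering style calculation. The retail and export rates are placeholder assumptions; real tariffs and feed-in programs vary widely by utility and jurisdiction.

```python
# Simplified net-metering credit calculation for a single billing period.
# Rates are placeholder assumptions; real tariffs and FiT programs vary by utility.

def monthly_bill(kwh_consumed: float, kwh_exported: float,
                 retail_rate: float = 0.15, export_rate: float = 0.08) -> float:
    """Net charge (positive) or credit (negative) for a solar homeowner."""
    charge = kwh_consumed * retail_rate
    credit = kwh_exported * export_rate
    return charge - credit

# Home uses 800 kWh and exports 450 kWh of excess solar back to the grid.
print(f"net bill: ${monthly_bill(800, 450):.2f}")   # $84.00 under these assumptions
```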

“Historically, only 10-20% of queued projects have materialized, often taking over 5 years post-application to finally connect — and those timelines are only lengthening.” 

There’s a significant amount of demand for more electricity and faster ways to get it integrated into the system, but our current infrastructure cannot keep up. The Department of Energy found in a 2023 report that within-region transmission must increase by 128% and inter-region transmission must increase by 412%. These two distinct types of transmission involve the movement of energy between power plants, substations, high-voltage transmission lines (HVAC and HVDC) and end users.

Within-region challenges include:

  • Grid congestion
  • Challenges balancing supply & demand as more renewables are integrated
  • Blackouts and inefficiencies as lines are overloaded
  • Space constraints for expanding transmission infrastructure
  • Electricity market fragmentation within region

Inter-region challenges include:

  • Line loss as resistance in transmission lines is exacerbated over long distances
  • Inefficiencies from power loss
  • Difficulty synchronizing grids that often operate under different standards, regulations and market structures
  • Capacity constraints at interconnection points between regional grids
  • Electricity market fragmentation across regions

Daylight

Daylight is a protocol hoping to sell user distributed energy resource (DER) data to energy companies that want to make the electric grid more performant, with an end goal where anyone can build a virtual power plant (VPP) from within the Daylight protocol. VPPs are effectively a system of integrated, heterogeneous energy sources that provide power to the grid. They’re uniquely suited to DePIN, given its strengths in incentivization and coordination.

The core user flow of Daylight is as follows:

  • Homeowners can store excess energy produced by a DER; this is standard, as there’s typically enough in excess to produce revenues and sustain a home’s energy consumption
  • In times of high demand, distribute this energy back to the grid
  • VPP operators can aggregate these homes into pools and sell that capacity back to a marketplace (a rough sketch of this flow follows below)
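Here is a minimal sketch of that aggregation loop. The device data, prices, and operator fee are made up for illustration; this is not Daylight’s actual protocol logic.

```python
# Toy sketch of VPP aggregation; device data, prices and the operator fee are made up,
# and this is not Daylight's actual protocol logic.

homes = [
    {"owner": "alice", "stored_kwh": 8.0},
    {"owner": "bob",   "stored_kwh": 5.5},
    {"owner": "carol", "stored_kwh": 12.0},
]

GRID_EVENT_PRICE_USD_PER_KWH = 0.40   # assumed peak-demand price paid by the market
OPERATOR_FEE = 0.10                   # assumed cut kept by the VPP operator

dispatched = sum(h["stored_kwh"] for h in homes)
gross = dispatched * GRID_EVENT_PRICE_USD_PER_KWH
for h in homes:
    share = h["stored_kwh"] / dispatched
    h["payout_usd"] = gross * share * (1 - OPERATOR_FEE)

print(f"dispatched {dispatched:.1f} kWh, gross ${gross:.2f}")
print({h["owner"]: round(h["payout_usd"], 2) for h in homes})
```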

“Instead of needing to pick a single centralized company to administer their energy assets, each home or business owner could delegate responsibility to whomever is paying the most, in real time.”

Daylight wants to make it easier to aggregate DERs across the country, both for energy companies and DER owners. For the homeowners, the value-add is obvious: more money in a simpler way. For energy companies, the value-add is aggregating supply of energy to loop back into the grid. Daylight is one example but there are others working on similar problems using a different approach.

SCRFUL is approaching this problem from a slightly different angle – the team released a governance proposal (HIP 128) within the Helium ecosystem to launch a subnetwork rewarding users for solar power production and battery energy storage.

The idea here is that enabling dual mining for existing compatible Helium hotspots allows SCRFUL to create a VPP leveraging the existing Helium network. Similar to Daylight, the team will first start with data capture and aggregation before moving into building out the VPP. This is particularly intriguing when you consider the importance of distribution and geographic dispersion: Helium hotspot owners are already comfortable with the premise of DePIN, so we can assume a motivated user base.

It’s worth noting the SCRFUL team hasn’t just followed the recent distributed energy trend; they’ve been building in this space for years. These are just a pair of examples, but there’s far more work to be done beyond decentralizing the electric grid – like financing the infrastructure revamp.

Plural Energy

Plural Energy wants to make it as easy to invest in renewable energy as it is to invest in the stock market. Over $4 trillion still needs to be invested just to meet 2030 climate goals. And regardless of where you stand on climate politically, the returns from private climate investing are attractive, consistent and, until now, inaccessible. Plural’s solution is a platform that gives anyone access to attractive renewable energy investment opportunities they would otherwise be shut out of.

Mega infrastructure deals are complex, require large amounts of capital, and are typically reserved for large investment banks or high-net-worth individuals with privileged access. Plural is making it easier for anyone to access the “missing middle” of renewable energy investments, enabling access to an asset class that provides uniquely attractive returns.

Plural is on Base and has active portfolios live today.

Glow

Glow is a platform that enables easier deployment of solar farms in areas not previously incentivized for this infrastructure buildout. Their whitepaper describes how difficult it is to differentiate between solar farms that need financial assistance and those that already produce enough revenue to be self-sustaining. (Whether solar farms are maximally profitable is a different question.) Glow’s approach to providing financial assistance is unique: solar farms can only receive carbon credits if they pledge 100% of their gross electricity revenue to the Glow incentive pool. This filters out already-profitable farms, since for them the carbon credit rewards would pale in comparison to the electricity revenue they’d have to give up.

Glow is a blockchain project utilizing two tokens: the fixed-inflation GLW token and the GCC (Glow Carbon Credit) reward token, with each GCC representing one ton of CO2 emissions avoided thanks to solar production. Off-chain actors – Glow Certification Agents (GCAs) – verify carbon credits, audit solar farms in the network, and provide on-chain reports about the development of these farms. Glow’s economy functions by paying out 230,000 newly minted GLW tokens each week, distributed to solar farms, grants, council members, and carbon credit verifiers.
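The self-selection logic above boils down to a single comparison: a farm only opts in if the rewards it expects from the incentive pool exceed the electricity revenue it must pledge away. A minimal sketch, with invented figures:

```python
# Glow-style self-selection in miniature: pledge 100% of electricity revenue,
# receive incentive-pool rewards. Only farms that need the subsidy come out ahead.
def should_join(annual_electricity_revenue: float, expected_annual_rewards: float) -> bool:
    return expected_annual_rewards > annual_electricity_revenue

print(should_join(5_000, 12_000))    # True: a marginal farm that needs assistance
print(should_join(80_000, 12_000))   # False: a self-sustaining farm keeps its revenue
```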

Personally, we are quite skeptical that carbon credits are sustainably large markets but Glow has been one of the highest revenue-generating DePIN projects over the past few weeks so we would be remiss to exclude them. 

StarPower

StarPower is yet another decentralized energy network that hopes to connect energy devices like ACs, EVs, and home storage batteries to increase energy efficiency and reduce costs across the energy stack. They’re positioning themselves as “Uber for energy” as they aggregate these devices under one roof to more effectively manage operations. The team saw the same opportunity in facilitating the development and growth of VPPs, and their tiered roadmap seems to focus on hardware.

StarPower’s system lets users connect their devices to the network and earn STAR rewards based on the electricity data those devices report and how they respond to grid events. While initially focused on water heaters and ACs, the team hopes to eventually integrate services for EVs and energy-storage batteries. They’ve already built a product suite and a plug that’s currently available for purchase, the Starplug: a simple wall plug that works with any StarPower-compatible device and features real-time monitoring, remote control, and energy optimization.

StarPower’s other products include the Starbattery and the Starcharger. The former is a home energy storage solution that absorbs excess energy from the grid, provides backup power during outages, and manages optimization during peak consumption hours. The latter is an EV-charging solution that ensures EVs are charged during off-peak hours, taking advantage of lower off-peak rates and reducing the load on the electric grid. It’ll be interesting to watch how these competing teams go to market given their geographic dispersion today. We’ve seen teams run the gamut: US-first, Europe-first, and Asia-first GTM.

Power Ledger

Power Ledger is slightly different in that it allows users to trade energy peer-to-peer and market DERs to energy companies. The platform is split into two core pillars: energy trading & traceability, and environmental commodities trading. The traceability platform lets individuals easily see where the energy they’re using comes from and enables P2P trading of excess grid energy. The commodities platform gives traders access to a market for carbon credits, renewable energy certificates, and other energy derivatives. It remains to be seen how interesting or durable the carbon credit market is (we remain dubious), but P2P trading in markets this large is interesting nonetheless. It would be naive to pretend the regulatory piece of this puzzle doesn’t exist, though, given how critical energy markets are.

The distributed energy sector faces a ton of challenges and is arguably the most complex and nuanced vertical across DePIN. Aside from regulatory clarity, there needs to be a significant overhaul of existing infrastructure. While that can be framed as a steeper, more difficult set of challenges, it also represents a significant opportunity for those able to pull this future forward.

Sure there are a handful of teams fighting for the same pie today, but similar to telecom, the size and scale of energy markets mean there’s room for multiple winners. Decentralizing the electric grid is a top priority, but smoothing the volatility that comes with adding more DERs will be especially critical. Given the complexity and sensitivity of energy markets, we expect the emerging winners will be teams with strong backgrounds in traditional energy who realize crypto provides the best rails to bring abstract ideas to real-world production.

Compute, Storage, and Bandwidth

The role of storage, compute, bandwidth, and retrieval encompasses much of the work required to deliver us the internet we know today. We’ve already talked about the exponential growth of data, but it’s important to understand how it’s managed, stored, and delivered today and whether that may change in the near future.

If storage networks need more demand, compute networks need more supply, and retrieval networks need density to compete, how do we bundle these services together and create a decentralized alternative at scale?

Compute

Decentralized compute platforms are growing in popularity thanks to increased awareness of machine learning and a growing appreciation of how valuable compute really is. At a high level, compute is just electricity transformed by a machine to perform a calculation. More precisely, it’s the processing power, memory, networking, storage, and other resources required for any program to run successfully.

The discourse around compute resources has evolved significantly with the rise of large-scale models. We know these models require incredible compute resources for both training and inference – GPUs are especially valuable given their capacity for parallel processing. State-of-the-art models require training on massive datasets over weeks or months on high-performance resources. 

To add some context, it cost Meta upwards of $600 million to train Llama 3.1 405B, and some estimates suggest the training compute for large models has been doubling every 3-4 months. Nvidia has quite obviously been the largest beneficiary of the GPU boom as their chips are the dominant hardware for large-scale training. Google’s TPUs and other custom silicon like Apple’s Neural Engine also play a role here.

Without belaboring a point that many readers here will be aware of, the rapid demand increase has outpaced supply, leading to bottlenecks that hinder further scaling. Some of those key constraints include:

  • Chip supply-chain disruptions
  • Manufacturing limits
  • Cloud resource scarcity
  • Energy & cooling constraints
  • Cost constraints

A recent analysis of data center sizing revealed that “leading frontier AI model training clusters have scaled to 100,000 GPUs this year, with 300,000+ GPU clusters in the works for 2025” and there are plans for even bigger clusters. In dollar terms, a cluster of 100,000 GPUs would easily cost well over $500 million on the low end, and that isn’t even accounting for the cost of maintaining a system of this size.
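Some back-of-the-envelope math on that claim, assuming a rough $25,000-$40,000 unit price for an H100-class accelerator (our assumption, not a figure from the analysis):

```python
# Hardware-only cost of a 100,000-GPU cluster under assumed unit prices.
gpus = 100_000
for unit_price in (25_000, 40_000):
    print(f"${gpus * unit_price / 1e9:.1f}B")   # ~$2.5B to ~$4.0B

# Networking, power, cooling, and facilities add materially on top,
# so "well over $500 million" is a conservative floor.
```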

Meanwhile, training costs have only gone up and to the right, with some hypothesizing that simply training models with more compute might be one of the only logical solutions.

Dwarkesh Patel summed up the compute race in this December 2023 write-up:

“All this extra compute needed to get self-play to work is in addition to the stupendous compute increase already required to scale the parameters themselves (compute = parameters * data). Using the 1e35 FLOP estimate for human-level thought, we need 9 OOMs more compute atop the biggest models we have today. Yes, you’ll get improvements from better hardware and better algorithms, but will you really get a full equivalent of 9 OOMs?”
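The arithmetic behind the “9 OOMs” figure, taking the quote’s 1e35 FLOP estimate at face value and assuming today’s largest training runs sit around 1e26 FLOP (our assumption of the order of magnitude being referenced):

```python
import math

largest_run_flop = 1e26    # assumed order of magnitude of today's biggest training runs
human_level_flop = 1e35    # the estimate cited in the quote

gap = math.log10(human_level_flop / largest_run_flop)
print(f"Compute gap: ~{gap:.0f} orders of magnitude")   # ~9
```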

So we have all of these constraints but the beauty of this is that it means opportunity. Improving the efficiency and accessibility of compute for these large models is the path forward, and we’re already seeing different approaches being taken here. Model efficiency research includes techniques like model pruning and knowledge distillation aimed at reducing the number of parameters and operations required for inference. Developing sparse architectures, where just a fraction of the model’s neurons are activated for each input, is another avenue for efficiency gains. 

On the hardware side, we’re seeing the largest companies in the world develop specialized hardware designed specifically for AI workloads. Another area we’ve spent a bunch of time in recently is photonic computing – a method of using light (photons) rather than electricity to process data. Photonic chips could significantly drive down energy consumption while speeding up data transmission, making them particularly well-suited for scaled AI training.

Last but not least – and most relevant for this report – is distributed compute. Beyond the simple observation that unlocking unused computational power is useful, what good are these resources if you can’t train models with them? This process we’re referring to is known as decentralized (or distributed) training, a topic that’s become increasingly attractive as an alternative to the black box of centralized AI lab model development. 

Decentralized training can make use of a few distinct processes to better train models across heterogeneous compute. There isn’t one method applicable to every distributed training workload, but it’s likely that a suitable approach exists, or soon will, for most of them. Some of the more popular methods (highlighted in an excellent summary here) include DiLoCo, DiPaCo, SWARM Parallelism, and a few others.

These distributed training methods are built around the idea of parallelism, which comes in three flavors: data parallelism, tensor parallelism, and pipeline parallelism. Standard GPU clusters are large sets of GPUs placed together in a single location, organized to produce as much compute power as possible. But as we mentioned earlier, the number of GPUs you can place in one location is constrained by physics.
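As a concrete (and heavily simplified) illustration of the first flavor, here is a toy data-parallel training step: each worker computes gradients on its shard of the batch, and the gradients are then averaged and applied to the shared weights. That averaging is the synchronization step methods like DiLoCo try to make cheaper by communicating less often; none of this is any specific protocol’s code.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                                   # shared model weights (a toy linear model)
X, y = rng.normal(size=(64, 3)), rng.normal(size=64)

def local_gradient(w, X_shard, y_shard):
    # gradient of mean squared error on this worker's shard
    err = X_shard @ w - y_shard
    return 2 * X_shard.T @ err / len(y_shard)

num_workers = 4
shards = zip(np.array_split(X, num_workers), np.array_split(y, num_workers))
grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]   # run in parallel in practice
w -= 0.1 * np.mean(grads, axis=0)                          # "all-reduce": average, then step
```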

Distributed training combats the single-location constraint by attempting to achieve the same capabilities as traditional GPU clusters without putting all the hardware under one roof. This is an active research space that directly ties to some of the work being done by these protocols. Even if Akash or IONet isn’t training its own LLMs yet, there’s a high likelihood that breakthroughs in distributed training will give them the capability to do so.

There’s an assumption here that if individuals are properly incentivized (via tokens) to provide quality compute to a global marketplace, the demand side will emerge – but this still hinges on protocols’ ability to manage their heterogeneous compute sources and apply the previously mentioned training methods in practice. The devil is in the details when it comes to implementing this and having the internal capabilities to execute. 

At the same time, the cost per token of inference is shrinking radically, as you might expect. Assuming these costs continue to trend towards zero, alternative methods of achieving distributed training should and will be explored further. If every step of creating a new LLM becomes dirt cheap but hinges on access to computing power, then what?

Decentralization can offer the opportunity for better distribution, increased scope of rewards, and a protection mechanism against centralization. But without performance parity in production, none of this will matter. The most immediate challenges here center around latency and communication overhead – i.e. synchronizing compute across different nodes for tasks like deep learning training invites inefficiencies. 

There’s also the question of integrity and security – one that can be alleviated with verification mechanisms. And maybe most importantly, there’s uncertainty around consistency and scalability – can the network handle heavy workloads on a consistent basis and adequately load balance?

There’s a long list of decentralized compute providers, aggregators, and marketplaces that exist today. Most of these are built on the assumption that a) blockchains are a natural fit for coordination of interested parties, and b) the presence of token incentives can solve the cold start problem. The user flow for many of these platforms is very straightforward – a participant posts a request for X amount of compute at Y price, and a provider fills this request in exchange for payment and token incentives from the network. Assuming an adequate supply of compute is available on any given marketplace, the demand side follows as the available GPUs should be generally cheaper than centralized cloud offerings. 
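A minimal sketch of that user flow, with hypothetical order and bid types (none of these names correspond to a real protocol’s API): a consumer posts a request for X GPUs at up to Y price, and the cheapest qualifying provider fills it.

```python
from dataclasses import dataclass

@dataclass
class Order:
    gpus: int
    max_price_per_gpu_hour: float          # consumer's ceiling, in USD

@dataclass
class Bid:
    provider: str
    price_per_gpu_hour: float
    available_gpus: int

def match(order: Order, bids: list[Bid]) -> Bid | None:
    eligible = [b for b in bids
                if b.available_gpus >= order.gpus
                and b.price_per_gpu_hour <= order.max_price_per_gpu_hour]
    return min(eligible, key=lambda b: b.price_per_gpu_hour, default=None)

bids = [Bid("provider-a", 1.45, 8), Bid("provider-b", 1.19, 4)]
print(match(Order(gpus=4, max_price_per_gpu_hour=2.00), bids))   # provider-b fills the order
```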

As we explore these protocols in greater detail, the goal is to examine the differences amongst them and share our view on particular design decisions. It’s less important to determine which of these will “win” than to examine the path they’re taking and strategic positioning. It’s probably also worth noting that we have a dedicated section for Decentralized AI which follows this one, and there’s some natural overlap to which protocols belong in which category.

  • Fluence: cloudless computing platform
  • IONet: internet of GPUs
  • Hyperbolic: open-access AI cloud
  • Akash: decentralized compute marketplace
  • Render: next generation of rendering and AI tech
  • Livepeer: open video infrastructure

Akash

Akash has been around since before most people reading this report got into crypto. The team wrote about some of the issues plaguing centralized cloud compute platforms today, all of which contribute toward an opening for decentralized alternatives to compete. Specifically, they called out:

  • Prohibitive costs
  • Data lock-ins
  • Permissioned servicing
  • Resource availability

For the unfamiliar, their platform is effectively your textbook compute marketplace. Operating as an L1 built with the Cosmos SDK, Akash has focused primarily on offering an extremely decentralized platform resistant to outside pressures while simultaneously delivering an experience on par with what’s available in the non-crypto world.

Individuals looking to provide compute to Akash can use the web app or command line, inputting information like their CPU, memory, and storage. Providers are then able to set their price and receive bids from prospective buyers. On the demand side, consumers can use compute purchased from Akash to build an application without needing to scale up server deployment themselves. 

There are a few native applications offered on Akash, and another 40 or so exogenous applications, L1s, and other protocols that have built atop the network. As of now there are 419 total GPUs deployed on Akash, 276 of which are available for use, with decent variety across a few dozen A100s and H100s.

The key components of Akash’s network architecture are the blockchain, application, provider, and user layers. Akash the L1 is built on top of the Cosmos SDK and Tendermint and is governed by the AKT token. The application layer manages deployments, orders, bids, and leases (the connection between consumer and provider). The provider layer is a bit more complicated, but consists of data centers, cloud providers, and individual server operators that run Akash Provider Software to manage all network activities. Akash uses Kubernetes and Docker Swarm to manage user-deployed components, letting it scale the number of deployments and users without manually juggling all of this. The user layer describes the interactions between prospective compute consumers and the network, including the interfaces in the app.

There’s also the Akash Node, which manages blockchain synchronization, transaction submission, and querying of the network state – all of this is somewhat dense, but more info can be found here. Akash has benefited from its easily accessible open-source codebase, consistent communication with the community, and maybe most importantly, its token appreciation on the back of the AI narrative. Though we think if you gave Akash supporters truth serum, they would agree the platform has been difficult to use. Anecdotally, we are seeing far leaner teams ship faster.

The Akash Accelerationism blog post from Q1 2024 detailed their proudest achievements and their aspirational path in the future. These include open-sourcing the entire Akash codebase, introduction of a “DAO-like” entity, and increased access to GPUs on the platform. Some of their recent efforts include competing against centralized AI labs on running vLLMs through Akash, collaborating with Prime Intellect on GPU deployments, and exploring decentralized model training with FLock.io on Akash.

As far as the race for more GPUs goes, Akash still has to catch up. 400+ GPUs is great but it pales in comparison to centralized compute providers like Lambda which has thousands of GPUs and relatively affordable pricing. There isn’t any indication that Akash has pursued distributed training just yet, but if their supply expands, they’re a candidate for some early experimentation.

Livepeer

Livepeer is a video infrastructure network. This excellent primer describes a handful of problems related to internet bandwidth – today 80% of all bandwidth is consumed by video streaming. Streaming video requires transcoding prior to distribution. Transcoding is the process of converting video files from one format to another so the content can be delivered to devices with varying specifications.

There are multiple types of transcoding, and the full process is beyond the scope of this report. It requires a complex sequence of steps involving de-multiplexing files, video encoding, post-processing, multiplexing, and specialized components for formatting audio data. It won’t be covered in depth here, but here's a link to a great explainer for those interested.

So, video transcoding is expensive: it can cost “around $3 per stream per hour to a cloud service such as Amazon, up to $4500 per month for one media server, and up to $1500 per month before bandwidth for a content delivery network,” which is a long way of saying this process is infeasible for most teams to do in-house without significant capital expenditure.

Livepeer wants to provide the infrastructure for developers to create live or on-demand video at costs over 50x cheaper. They achieve this through orchestrators and delegators, the two key participants in the Livepeer network.

Orchestrators opt into the system and run software that opens up access to their CPUs, GPUs, or bandwidth, which is used to run transcoding jobs on user-submitted video. To be eligible for this work, orchestrators must own LPT tokens – the more stake behind them (their own plus delegated), the more work they’re eligible to do, resulting in increased LPT rewards for their services. Delegators stake their own LPT towards orchestrators doing “good work” on the platform, earning a share of fees from work performed and of future LPT emissions. This stake-for-access model lets Livepeer’s economics scale as demand for the platform increases.
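A simplified sketch of the stake-weighted incentive described above: an orchestrator’s share of transcoding jobs is proportional to the LPT staked behind it. This is an illustration of the model, not Livepeer’s actual job-selection code.

```python
import random

stake = {                        # orchestrator -> total LPT staked (own + delegated)
    "orchestrator-a": 50_000,
    "orchestrator-b": 30_000,
    "orchestrator-c": 20_000,
}

def pick_orchestrator(stake: dict[str, int]) -> str:
    names, weights = zip(*stake.items())
    return random.choices(names, weights=weights, k=1)[0]

jobs = [pick_orchestrator(stake) for _ in range(10_000)]
print({name: round(jobs.count(name) / len(jobs), 2) for name in stake})   # ~0.5 / 0.3 / 0.2
```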

Livepeer’s business model uniquely incorporates its token and the vertical they’re competing in is both topical and growing. Short-form video content is being used across nearly every business vertical – marketing, comedy, news distribution, and everything in between. Creators across platforms like TikTok, Instagram, and Twitter are increasingly relying on short-form content to drive engagement. Additionally, global wireless data transfer growth is compounding at over 25% annually, driven primarily by this mobile video traffic. Assuming this trend continues, Livepeer should be able to capture a reasonable amount of this demand if they can maintain the comparatively cheaper costs of transcoding.

The network is very simple to understand and quite easy to get started on – a true rarity in crypto. Orchestrators run a node and state the price they’re charging for transcoding services. The Livepeer node then routes incoming tasks to the orchestrator’s GPU(s) and gets started without any manual interaction on the orchestrator’s side. Consumers get the services they want at a much lower cost, orchestrators earn with their otherwise idle GPUs, and delegators earn fees assuming activity is high and they’ve delegated well. It’s actually quite an elegant design.

So far Livepeer has transcoded almost 427 million minutes of video and seen over $800,000 in fees paid. It’s also worth noting that Livepeer continues to be led by its original founder. Video is everywhere, especially short-form video, and the global increase in content creators may eventually lead to a situation where video platforms charge users for the right to post content, though we aren’t assuming all consumers flock to a decentralized solution immediately. In a market where centralization is a growing concern, Livepeer’s solution looks favorable, making it one of the more unique DePIN projects out there.

IONet

Next up is IONet, another compute marketplace competing against Akash and everyone else. IONet’s messaging is a bit chaotic, but it is primarily a decentralized hub where buyers and sellers of compute can come together and transact more efficiently. They also mention a bunch of smaller features like a commitment to green data centers, better security compliance via SOC 2, streamlined cluster deployment, and instant payments through Solana. None of these things will matter if they don’t provide easy and cheap access to compute.

IONet believes its differentiators are its speed of deploying GPUs and its ability to offer 90% cost reductions compared to traditional providers. Looking at Akash as a reference, one hour of H100 usage costs $1.45; one hour of H100 usage on IONet costs $1.19. That counts as a win in the decentralized compute battle, but let’s compare to H100s offered on Lambda.

Looking at Lambda’s home page, we can see it costs $2.49 an hour, giving Akash and IONet roughly 40-50% cheaper rates. It’s possible the stated 90% cost reduction applies when using more GPUs or renting them for longer periods of time. We also can’t pretend there hasn’t been quite a bit of controversy surrounding IONet spoofing GPU counts on their website.
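For the record, the discount math on the prices quoted above (these are snapshots and move around):

```python
# Percentage discount versus Lambda's listed H100 rate, using the prices above.
lambda_h100, akash_h100, ionet_h100 = 2.49, 1.45, 1.19
for name, price in [("Akash", akash_h100), ("IONet", ionet_h100)]:
    print(f"{name}: {1 - price / lambda_h100:.0%} cheaper than Lambda")
# Akash: ~42% cheaper, IONet: ~52% cheaper
```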

It may be an argument for a different forum, but the people actually using any of these compute platforms know who does and doesn’t have the access they advertise – this can be found pretty easily by requesting services from any of these protocols. The burden now rests on IONet to prove they are building a real platform and can achieve efficient cluster coordination as they scale their GPU supply.

Hyperbolic

Hyperbolic is an open-access AI cloud that lets you provide compute, run inference, and access comparatively cheap GPUs. They believe AI should be an openly shared asset but take issue with the current state of open-source AI: teams are open-sourcing some of their code, yet the fundamental piece powering AI – compute – is being hoarded by centralized data centers. They claim there are over two billion computers in the world that sit inactive for over nineteen hours every day – what if that idle compute could be repurposed for the greater good?

With Hyperbolic, now you (allegedly) can. 

The platform functions similarly to others already mentioned: individuals with idle compute post it to Hyperbolic, consumers purchase it at specified hourly rates, and everyone gets what they want. For providers, compensation comes in the form of rental payments and points (assuming these eventually become a token). Hyperbolic publishes its hourly GPU rates, which exclude a 10% platform fee.

On the inference side, Hyperbolic offers access to text-to-text, text-to-speech, text-to-image, and text-to-video models, along with fine-tuning services. As a quick aside, AI inference is the process of using a trained model to produce outputs from new inputs. Inference requires less compute than training but can still be resource-demanding; Hyperbolic offers a base rate of sixty requests per minute for free users and an increased rate of six hundred per minute for users holding an account balance over $10. There’s also an enterprise tier, with hourly rates ranging from $0.30 to over $3.20 depending on the model used – though the actual number of models available on the platform isn’t currently visible…

Hyperbolic has marketed to both crypto and non-crypto machine learning enthusiasts, most notably through a recent tweet from Andrej Karpathy. In Karpathy’s case, he described Hyperbolic as a nice alternative to communicating with base LLMs, like Llama 405B. Hyperbolic has also been referenced as a key piece of the tech stack in novel research surrounding multi-human, multi-AI interactions thanks to its ability to efficiently run Llama 405B. The platform currently supports a few variants of both the Hermes and Llama models, Stable Diffusion models for image generation, and Melo TTS for audio generation. 

A note on verification

We noted earlier here that verification could play a role when it comes to security & trust – most compute protocols today aren’t doing optimistic or zk-based verification or any other variations (TEEs, MPC, etc.). But as supply and demand scale, there may be a heightened need for some level of verification.

When it comes to the verification of compute, we can broadly break these down into:

  • no verification mechanism
  • optimistic verification mechanism
  • zero-knowledge based verification mechanism

Thus far, the presence (or lack thereof) of verification has not made a difference to the initial success of these projects. Teams are aggregating compute and marketing themselves on an assumption that the demand side is predominantly focused on cheaper GPU access – realistically this is a fair assumption in our opinion. But at some point the process of verifying this compute and offering cryptographic guarantees to users will become more important. These guarantees remain one of the most underappreciated benefits of the sector. Access to cheaper GPUs is the first step, but cheaper GPUs with built-in and immutable trust mechanisms are the natural evolution of these platforms. 
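A toy sketch of the optimistic flavor from the list above: results are accepted by default, but a challenger can re-run the work within a dispute window and slash the provider’s bond on a mismatch. Names, numbers, and the stand-in workload are all illustrative.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    provider: str
    task_input: int
    claimed_output: int
    bond: float                     # collateral at risk if the result is wrong

def reference_compute(x: int) -> int:
    return x * x                    # stand-in for the real workload

def challenge(sub: Submission) -> str:
    # A challenger re-executes the task; a mismatch is a fraud proof.
    if reference_compute(sub.task_input) == sub.claimed_output:
        return f"{sub.provider}: result upheld, bond returned"
    return f"{sub.provider}: fraud proven, bond of {sub.bond} slashed"

print(challenge(Submission("provider-a", 7, 49, bond=100.0)))   # upheld
print(challenge(Submission("provider-b", 7, 48, bond=100.0)))   # slashed
```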

Moving forward, the solution to compute constraints is going to involve some cocktail of hardware advancement, software optimization, and distributed network models. In the immediate term this probably looks like continued development of AI-specific hardware, though alternative computing (photonics, quantum) will play a role over the longer term. Software optimizations should lead to more efficient models with improved energy consumption. But where crypto is best positioned to compete and add value is on the distributed compute network side. As these networks evolve, they may become key enablers of large-scale AI training (something we’ll go into in more detail in a bit), particularly for those without access to centralized cloud infrastructure.

Decentralized Storage

Decentralized storage is one of the foundational pillars of DePIN, largely thanks to the work Filecoin has put into its product suite since its 2017 token sale (mainnet followed in 2020).

It wouldn’t be a write-up about storage if we didn’t include Filecoin. Aside from developments made in the past 1-2 years, Filecoin was decentralized storage for the longest time. The easiest way to explain decentralized storage is with a high-level overview of Filecoin, the decentralized storage network designed to store humanity’s most important information.

Filecoin functions as a peer-to-peer network for individuals to store and retrieve data across the internet. Traditional services that require sophisticated levels of data storage utilize either centralized systems or build it themselves. Filecoin’s goal is to fix this and offer bespoke or umbrella data services to everyone, anywhere in the world, as simply as possible.

The Filecoin network revolves around storage providers and a customer base that has a desire to pay for data services. Storage providers are essentially computers (represented as nodes) operating within the network that store user files and provide verification that these files have been stored appropriately. Customers pay a fair market price based on storage availability at the time of payment, depending on the scope of their demands and storage requirements. 

The storage market is where storage providers and clients negotiate to bring these deals on-chain, with the lifecycle of a deal split into four parts: discovery, negotiation, publishing, and handoff. Clients identify providers, negotiate terms, publish the deal on-chain, and finally hand the data off to sectors – the basic units of “provable storage” where storage providers verify the data and complete these deals.

The next most important part of the stack is the actual retrieval of user data, as there isn’t much point to decentralized storage if you can’t easily access it. In its current form, Filecoin supports direct retrieval from miners, with clients paying FIL tokens after sending a data request with instructions attached.

Retrieval requests must include the storage provider ID, the data’s content identifier (CID), and the address used to originate the data deal. CIDs are used within the Filecoin network to identify submitted files – this ties back to the concept of “pointing” at data on-chain to aid in the verification process. If there’s a large amount of data within a system, there must be an accompanying sorting process to help make sense of it. This is where CDNs come in, with Filecoin utilizing its very own decentralized CDN – Saturn – to manage all of this. Saturn functions as a two-sided marketplace where node operators complete tasks for FIL and customers manage and retrieve their purchased data.
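To make the deal and retrieval objects more tangible, here is an illustrative sketch loosely modeled on the concepts above (CIDs, providers, sectors, deal stages). The field names and values are ours, not Filecoin’s actual types.

```python
from dataclasses import dataclass
from enum import Enum

class DealStage(Enum):
    DISCOVERY = 1      # client identifies a provider
    NEGOTIATION = 2    # terms and pricing agreed
    PUBLISHING = 3     # deal published on-chain
    HANDOFF = 4        # data handed off to a sector of provable storage

@dataclass
class StorageDeal:
    client: str
    provider_id: str
    data_cid: str              # content identifier of the stored data
    price_per_epoch: float
    stage: DealStage

@dataclass
class RetrievalRequest:
    provider_id: str           # who holds the data
    data_cid: str              # what to fetch
    deal_origin_address: str   # address that originated the storage deal

deal = StorageDeal("client-1", "f0123", "bafyexamplecid", 0.0001, DealStage.HANDOFF)
request = RetrievalRequest(deal.provider_id, deal.data_cid, "f1exampleaddress")
```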

Filecoin’s team deeply understands how difficult it is to see the whole picture of what it is they’re building – the Filecoin TLDR is an excellent example of a resource so many crypto projects need.

Let’s break down these numbers. Total storage capacity refers to the five exbibytes (EiB) of data capacity across the Filecoin network; one EiB is 2^60 bytes – approximately 1,152,921,504,606,846,976 of them – so five EiB is roughly 5.8 quintillion bytes. The two EiB of total active deal size refers to how much of that capacity is currently being used by the network as a whole. This is a lot of data: two EiB is roughly 682 million hours of HD video content, or 512 days’ worth of Facebook’s global data.
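The unit math, for reference (the GB-per-hour figure for HD video is our assumption, chosen to roughly match the comparison above):

```python
EIB = 2**60                          # one exbibyte in bytes
total_capacity_bytes = 5 * EIB       # ~5.76e18 bytes of network capacity
active_deals_bytes = 2 * EIB         # ~2.31e18 bytes in active deals

gb_per_hour_hd = 3.4                 # assumed size of one hour of HD video, in GB
hours_of_hd_video = active_deals_bytes / (gb_per_hour_hd * 1e9)
print(f"{hours_of_hd_video / 1e6:.0f} million hours")   # on the order of 680 million
```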

The Filecoin Foundation wrote an excellent summary of the network’s status back in May of this year. As of then, there were over 3,000 storage provider systems operating on Filecoin and over 2,000 clients that had submitted datasets to the network, along with 200% YoY growth in deal origination data size.

Filecoin’s most important metrics are arguably active deals and current effective computing power, which shows 2,925 active contracts (across 2.8 million transactions) and 22.62 EiB of power capacity. The actual max data capacity is irrelevant to the discussion if the demand side hasn’t even come close to being exhausted. 

Messari’s July 31 “State of Filecoin” report gives a more updated outlook on Filecoin and some additional metrics, most notably its $273 million of FVM TVL and 3% QoQ storage utilization growth. The report does a good job of highlighting Filecoin’s current goals and where it wants to be in the future. This includes a revived focus on enterprises through a grants program and increased attention to “AI-oriented” projects building in the ecosystem. Everyone wants a piece of the AI narrative.

Filecoin has aspirations beyond just storage though, as the protocol has introduced the Filecoin Virtual Machine. We won’t spend a ton of time on this given we’re focused specifically on storage, but the FVM is similar to many of its competitors – it’s a standard runtime environment for smart contracts built on top of the Filecoin network. Their docs describe the Filecoin storage and retrieval network as the base layer of the system, with the FVM standing as a layer enabling programmability – similar to how the EVM sits atop Ethereum’s architecture and manages the software we interact with on-chain.

As far as decentralized storage goes, Filecoin is king – mostly because decentralized storage has yet to build a compelling product to compete with centralized providers. Yes, Filecoin works in theory, and the team is actively iterating, but any decentralized storage product has to compete with the AWS and Azures of the world.

Lightshift wrote a Filecoin thesis in 2023 and gave a concise summary of why putting data on-chain even matters. As of 2021, just four players held a 70% market share, with AWS and Microsoft Azure comprising 54% of this themselves. Beyond centralization, they cited regulatory concerns, lack of data verifiability standards, and insufficient trust mechanisms as major pain points for the centralized cloud and storage market. 

“Far beyond money applications, blockchain is positioned to safeguard our data sovereignty, identity, or any other type of ownership.”

This is a longer-term trend we very much believe in. Self-sovereignty is a trend that appropriately started with financial sovereignty on the back of the Great Financial Crisis. We continue to believe this principle will permeate beyond finance and into the other institutions we interact with: data, healthcare, education and climate are just a few examples.

Jackal

Jackal is an L1 that wants to provide secure storage solutions in a multi-chain world. The protocol functions as a trustless cloud storage solution designed to make other blockchains more efficient at storing and facilitating data. Through smart contracts and IBC, Jackal makes it easier for any other blockchain to manage data-intensive apps and scale its data needs.

Jackal publicizes six second blocktimes and $0.0004 transaction costs on the L1 side and 500+ Mbps download speeds across over 221,000 files stored on the data side. The tech stack is quite similar to Filecoin: storage providers create contracts for storing user files, validators participate in Jackal’s Proof-of-Persistence (PoP) consensus, an AES-256 encryption mechanism is used, and smart contract modules manage core protocol functionality. The JKL token is used like other L1 tokens for chain validation and transacting. This is all represented in a protocol organizational chart that gives Maker a run for its money…

Jackal’s PoP mechanism exists to ensure storage providers maintain the availability and integrity of data on the network. Where validators in a PoS network are primarily concerned with reaching consensus on the next block, PoP validators are most concerned with data integrity. To achieve this, storage providers must periodically attest that they still hold the data they committed to storing, enforced through random challenges. Candidly, the proof and ability to retrieve data has been a contentious point for some decentralized providers.
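A toy illustration of the random-challenge idea: the provider proves it still holds a file by hashing it together with an unpredictable nonce, which the verifier checks against its own copy or a stored commitment. This is a simplification of proof-of-persistence-style schemes, not Jackal’s actual protocol.

```python
import hashlib, os

stored_file = b"user data the provider committed to store"

def prove(data: bytes, nonce: bytes) -> str:
    # Only someone holding the data can produce this digest for a fresh nonce.
    return hashlib.sha256(nonce + data).hexdigest()

nonce = os.urandom(16)                       # verifier picks an unpredictable challenge
proof = prove(stored_file, nonce)            # provider answers the challenge
assert proof == prove(stored_file, nonce)    # verifier checks the response
```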

The main value proposition behind Jackal is its ability to persistently store this data, encrypt it, and maintain its state over time. On the centralization front, existing storage providers are prone to outages and data leaks, whereas Jackal is in theory resistant to this thanks to its network of distributed storage providers and encryption methods. Users of Jackal can organize their submitted data into a file tree system, where private data sharing can be achieved between users via cryptographic keys. 

When a file is uploaded, a corresponding smart contract is formed representing the relationship between a user and the storage provider, outlining the details of their arrangement and pricing. Providers are incentivized to maintain this data through earning JKL tokens, while users are incentivized to submit data as their participation across the ecosystem earns them JKL.

“Currently, it is the only storage network that has the following features. Self-custody, on-chain permissions, programmable privacy, peer to peer file transfer, cross-chain functionality, hot-storage speeds, a modular application specific blockchain, proof of stake blockchain, protocol managed redundancy, protocol managed storage deals, and much more.” 

Looking at Jackal today, the platform is still in its early stages. It’s difficult to find more granular data so their published metrics are what we have. With speeds of 500+ Mbps, Jackal has obvious strides to make on performance given AWS and Azure have multi-Gbps download speeds.

Arweave and AO 

Next up is Arweave, which is arguably just as well known as Filecoin and recently back in vogue thanks to its AO Computer product push, with token price appreciation also drawing more eyeballs to the network. Arweave is a protocol built for permanent information storage – a decentralized web of information within a permanent digital ledger. You can upload data, build dApps, use Arweave as a database, and launch smart contracts, making it a data-focused L1 similar to Filecoin in purpose but closer to Jackal in structure.

Unfortunately, the information surrounding Arweave is organized in a somewhat frustrating way, but the main node of info begins here. Miners store and replicate the network’s data in exchange for AR tokens, replicating the dataset as many times as possible and proving access to it each block via Succinct Proofs of Random Access (SPoRA).

To keep miners honest about actually storing data, Arweave requires them to prove they had access to the network’s data as of the previous block before they can mine the next one. Arweave’s core product behind all of this is what it refers to as the permaweb, a subset of the entire internet stored permanently within the network.
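A toy version of that “prove you can access old data” requirement: mining involves hashing a randomly selected historical chunk together with the previous block, so a miner that doesn’t store the data can’t participate. This is heavily simplified relative to Arweave’s actual SPoRA scheme.

```python
import hashlib, random

weave = [f"chunk-{i}".encode() for i in range(1_000)]   # the full stored dataset

def mine(prev_block_hash: str, stored_chunks: dict[int, bytes], total_chunks: int) -> str:
    random.seed(prev_block_hash)                  # the recall chunk is unpredictable in advance
    recall_index = random.randrange(total_chunks)
    recall_chunk = stored_chunks[recall_index]    # a miner missing this chunk cannot answer
    return hashlib.sha256(prev_block_hash.encode() + recall_chunk).hexdigest()

full_copy = dict(enumerate(weave))
print(mine("prevhash00ab", full_copy, len(weave))[:16])
```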

Where Filecoin focuses on more flexible storage that varies depending on its customer base, Arweave is building a permanent internet resistant to all types of centralization or outside forces. This is an entirely different type of product and feels more like a science experiment scaled up to the highest degree than it does a business.

Arweave generates revenue by charging a one-time, upfront payment for storage, with the protocol accruing AR tokens and additional fees going into the Arweave endowment. The endowment gradually grows as more users store data on the network, and Arweave assumes continually declining storage costs will leave enough capital to back up the data far into the future.

More recently, in a very long post, the Arweave team outlined their vision for the AO Protocol – a “decentralized, open-access supercomputer.” AO Computer acts as a virtual machine with computation and storage fully integrated, allowing devs to build and execute dApps in a singular environment. The TLDR is that it’s designed to let you run dApps that tap into Arweave’s storage – where data is permanently stored via the permaweb – while adding computation on top.

If you’d like a bit more detail about AO’s system and the protocol side, feel free to read below.

AO is an interesting expansion for Arweave and has drawn attention back to the ecosystem. The team’s Twitter is active with brief product updates, ecosystem developments, and an incubator program built for AO dApps. As with most new platforms like this, the question will come down to differentiation and liquidity. In this case there’s some brand cachet, but we’ve yet to see a compelling advantage to building here. That doesn’t mean we won’t, but the burden of proof rests on the AO and Arweave teams’ shoulders. In an ideal world, AO can serve as a kind of data or information funnel for Arweave and its permanent data collection scheme. Once again though, without reliable performance of that data storage, nothing else really matters.

There’s a risk here that decentralized storage ends up as a commoditized product. On the one hand, yes a more reliable and resilient storage network that has no single point of failure is appealing, but that doesn’t necessarily mean we’ll see direct value capture from it. Especially given the nature of open-source software. It’s still a compelling space given our society’s insatiable data consumption appetite, but value accrual dynamics and moat-building remain unclear. 

Bandwidth, Retrieval & Content Delivery Networks (CDNs)

Incidentally we have written a bit on this here. If you are too lazy to click for a fuller description, here’s the TLDR:

In some respects, CDNs are the backbone of the internet. They are facilitators that connect users with digital content — geographically distributed groups of servers that ensure users receive their beloved content upon request. Almost everything you do on the internet involves a CDN: opening email, browsing, and sending messages all depend on the multi-billion dollar infrastructure of CDNs. In the traditional world, Cloudflare and Akamai dominate.

Messari actually highlighted how Cloudflare possesses a DePIN-like flywheel of its own – its CDN services are free, which lets it see more traffic, which in turn feeds its R&D and UX-related network effects, inevitably leading to better data and higher revenues from improving the tech. Obviously Cloudflare lacks a token incentive for its users, but the service alone is valuable enough that the free barrier to entry makes it a worthwhile choice for almost anyone needing a CDN – and that’s quite a large potential customer base.

So why bother building a decentralized version?

The challenges and limitations here are the common ones that make distributed versions enticing. Traditional CDNs rely on centralized network infrastructure, which comes with single points of failure, vulnerability to attacks, reduced resiliency, and bottlenecks. Scalability is limited because meeting increased demand is costly and complex, and maintaining server infrastructure while expanding the network is expensive. There’s also a latency issue when it comes to reaching remote regions, as it’s not economically viable to place servers in every location across the globe.

A decentralized CDN is a network of nodes that collectively manage the tasks of a traditional CDN, making use of the P2P model to reduce the maintenance burden and offer a better user experience. By distributing content across a network of nodes that are independently operated, these networks can eliminate single points of failure while also allowing for dynamic scalability. As more users join the network, the capacity and performance of the CDN scales, drastically reducing the need for costly infrastructure build-out.

There are a handful of companies working on distributed CDNs and while none of them can match the output of incumbents like Cloudflare, there exist tailwinds that give them a shot at future success, even if initial traction has been limited.

Fleek

Fleek is an open-source, edge computing platform that wants to improve the execution of decentralized web services, including CDNs. The Fleek network consists of distributed nodes that manage services provided and consumed on the network, ranging from database management to CDNs, name services, and much more. Fleek has a separate site dedicated to its CDN-related services, with products like IPFS hosting, gateways, domains, and decentralized storage.

While crypto likes to virtue-signal about decentralization, many of the services powering our industry are not as decentralized as you’d expect. Fleek saw an issue with this and wanted to create a platform where valuable decentralized services could be offered in a secure and transparent way for everyone. They achieve this through the use of Proof-of-Stake, SNARKs, and the Narwhal + Bullshark consensus protocols.

Fleek manages a few different components to run the network, maintaining the state of token balances, staking details, data updates, and node reputation. This is pretty standard across DePIN projects, as state needs to be evenly distributed or replicated across all nodes. The four main actors within Fleek are clients, developers, end-users, and node operators.

To get paid on Fleek, node operators need to stake the native FLK token to perform work and become eligible for rewards. This income can come from serving shared cached data or fulfilling cached bandwidth requests, with rewards scaling based on the amount of bandwidth delivered. Upon delivery of this data, SNARKs are used to confirm that a node has successfully fulfilled a client’s request – these are referred to as Delivery Acknowledgements (DAs). DAs are gathered up and submitted in batches, which are then routed through the protocol and sent to consensus.
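A simplified sketch of that flow: each served request yields a delivery acknowledgement, acknowledgements are batched per epoch, and rewards scale with bandwidth delivered. The SNARK itself is abstracted away as an opaque field here; this is not Fleek’s actual code.

```python
from dataclasses import dataclass

@dataclass
class DeliveryAck:
    node: str
    bytes_delivered: int
    proof: bytes                 # stand-in for the SNARK attesting to delivery

def settle_epoch(acks: list[DeliveryAck], reward_pool: float) -> dict[str, float]:
    delivered: dict[str, int] = {}
    for ack in acks:             # in practice, proofs would be verified before counting
        delivered[ack.node] = delivered.get(ack.node, 0) + ack.bytes_delivered
    total = sum(delivered.values())
    return {node: reward_pool * b / total for node, b in delivered.items()}

acks = [DeliveryAck("node-a", 5_000_000, b"proof"), DeliveryAck("node-b", 15_000_000, b"proof")]
print(settle_epoch(acks, reward_pool=100.0))   # node-b earns 3x node-a
```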

Like other networks, Fleek wants to ensure its operators are providing the best quality data and not gaming the system. Their solution is a reputation system where nodes rate their peers and form aggregated scores at the end of every epoch. This info gets sent to other participants and is used to optimize different parts of Fleek — network flow, task assignment, and proximity of nodes.

Fleek’s performance can’t fully be judged against other projects as it’s still in testnet, but initial figures are interesting. Using information from a February 2024 testnet wrap-up, Fleek saw over 9,342 calls made on the network over a two-week period, with an average time to first byte (TTFB) of 37.02 milliseconds. TTFB measures the time from a data request to the first byte of data received; Fleek reports speeds 7x faster than AWS Lambda and 2.7x faster than Vercel. These benchmarks were chosen because AWS Lambda and Vercel are highly performant industry standards for the same kind of work – deploying serverless functions in the cloud. It remains to be seen what this actually looks like in production though.

Recent developments from Fleek claim mainnet will arrive by 2025. The team also introduced Fleek’s first product, Fleek Functions: specialized code snippets that offer highly performant and more economical server-side code execution on the network. The team has also been working on technical improvements like increasing node request-handling capacity and improving node synchronization. It’s worth calling out that Fleek is pushing on idea generation with a repository of product-building ideas exclusive to Fleek, plus in-depth development tutorials.

Fleek will be competing with some serious players in AWS and Vercel. For context, AWS alone generates tens of billions of dollars in revenue each year and is widely regarded as the definitive cloud computing provider. Vercel is much smaller, but has managed to achieve over $100 million in annual revenue in a short period of time. In some respects this is encouraging for those building in this space on the crypto side, as we know how quickly crypto revenues can scale. It will be interesting to watch how Fleek develops both on the technical side but also the product side as it comes to mainnet. In theory, you should be able to build tools ranging from an on-chain Dropbox to a trusted oracle network for usage across other crypto networks.

Meson

Meson is building out a decentralized marketplace to consolidate and monetize idle bandwidth. Bandwidth is simply the rate at which data can be transmitted across a network in a given period of time, usually measured in bits per second (bps). Seeing the exponential growth of streamed media and the increasing amount of time individuals spend consuming it, Meson envisions a world where everyone can access the bandwidth they need on demand. The traditional CDNs powering this information transfer are increasingly running into scaling and cost issues. Meson suggests the bigger problem is a lack of choice for the end user, though we are skeptical this will move the needle. Regardless, they believe the vast amount of idle bandwidth can serve as an alternative to centralized monopolies.

Meson offers a marketplace for people to trade their bandwidth, with two core products powering this: GaGaNode and IPCola.

GaGaNode is a decentralized edge / cloud computing platform that powers the bandwidth collection side of Meson’s business. Users download the GaGaNode extension on any of the devices they’d like to sell bandwidth from – GaGaNode then aggregates this into Meson’s DePIN network. Miners on Meson are able to send this bandwidth directly to GaGaNode with the data optimized and routed prior to delivery. 

IPCola manages the “ethical sourcing” of proxies used for managing vast amounts of data and data extraction purposes. Their docs are somewhat dense, but IPCola is essentially a global service that offers bespoke APIs for accessing over 70 million active IPs for the sake of easier data extraction. This is a bit ambiguous, but as it relates to CDNs and Meson, IPCola is tasked with providing “residential bandwidth” to operators within Meson. 

Meson has been quietly building out their product ecosystem and is active on Twitter. It’s still unclear which of the decentralized CDN providers has a compelling product in practice and what types of demand they’ll ultimately service, but the tech is fairly interesting. Distributing content is experiencing a bit of a crisis when it comes to cost, and so crypto has a real chance to play a role here. There are a few others like Blockcast, Filecoin Saturn and AIOZ that are also worth exploring.

Pulling these together a bit, we have three distinct parts of the compute/storage/bandwidth stack (given here by Messari):

  • Perpetual, encrypted, or verifiable storage on-chain
  • No overhead or contractual lock-ins with decentralized compute provision and p2p marketplaces
  • A dense, global network of nodes to enable faster and cheaper packet routing at scale

While these components are still segmented and slightly siloed, there’s no reason to believe crypto won’t see shared standards between VMs and L1 environments over the next 3-5 years. We’ve already witnessed the birth and subsequent growth of the modular thesis, increased awareness of the need for cross-chain standards, the evolution of bridges, and the never-ending deployment of new L1 and L2s in the past twelve months. 

Cyber Fund shared a Decentralized AI recap stating “developers should experience no UX difference” in reference to decentralized compute solutions – but this is good framing that extends to the entire stack of compute, storage, and retrieval. The ideal end state for the decentralized stack is a more performant, capital efficient system capable of attracting customers who may not even be aware of how crypto works. 

They don’t have to understand the minutiae, all they want is a better product: DePIN can offer that.

Crypto has an innate desire to be shared and distributed without arbitrary borders, and DePIN protocols need to lean into this narrative. Instead of targeting singular parts of the stack, there should be coordinated efforts between these protocols to collaborate and build the universal standards necessary to compete with centralized counterparties. This won’t happen in a day, and it definitely isn’t a problem exclusive to DePIN, but this meme rings true for a reason.

Artificial Intelligence and Decentralized AI

We mentioned previously there would be some overlap between the compute discussion and this decentralized AI section. In reality we probably could have combined these but we didn’t, so deal with it. It’s also fascinating to watch the positioning different crypto participants take regarding where crypto x AI fits in and whether it should be viewed as DePIN. The broader DePIN cabal seems intent on claiming AI as a subsector so this can serve as a barometer for the Decentralized AI (DeAI) space broadly.

As it stands, these are the AI-focused protocols building in DeAI with either real products today or a path to something interesting in the future:

  • Prime Intellect: democratizing AI development through organizing compute and GPU training clusters.
  • Bittensor: an open-source decentralized protocol that creates a peer-to-peer marketplace that converts machine learning (ML) into a tradable product.
  • Gensyn: hyperscale, cost-efficient compute protocol for the world’s deep learning models.
  • Prodia: decentralized image generation at scale
  • Ritual: building a way so any protocol, app, or smart contract can integrate AI models with minimal code.
  • Grass: data layer for crypto that rewards users for their data which is then used to train AI.

“Crypto wants information to be onchain so that it can be valued and add value to the system. AI wants information to be onchain so that it can be freely accessed and utilized by the system.” — Jacob, Zora

It can sometimes be difficult to find protocols in crypto with product-market fit. Infrastructure gets funded over novel apps, user attention is fleeting, and narratives change constantly. One narrative that’s managed to stick is that AI is extremely centralized and crypto can fix this. The discussion of closed-source versus open-source AI extends beyond crypto – large AI/ML labs like OpenAI and Anthropic hold their model weights and training data hostage, only releasing modified variants of frontier models to the public in the form of chatbots. Some argue the choice between closed and open-source AI is a matter of national defense. Others just want the freedom to build with the most performant, up-to-date variants of LLMs – crypto can help fix this.

There are a few different components of Decentralized AI worth discussing: verification, compute aggregation platforms, decentralized training, and software-focused infrastructure. Some of the main problem areas relevant to DeAI include optimistic versus zero-knowledge verification, on-chain ZK proofs, and the efficient usage of compute across heterogeneous sources. Ultimately though, these lofty goals must be viable both economically and technically on-chain.

Prime Intellect

Prime Intellect is the team most clearly shipping a real product, with exciting frontier technology, and as a result is being recognized outside of just our crypto bubble. Last month they publicly launched the Prime Intellect compute platform, which aggregates cloud compute from centralized & decentralized providers, offering instant and cost-effective access to high-demand GPUs (H100s, A100s, etc.). This has long been an issue for distributed compute platforms in crypto; oftentimes the GPUs they had access to weren’t the in-demand, high-performance ones. Not only that, the team expects to launch on-demand, multi-node clusters as early as this month as well.

While aggregating compute is just the first step for Prime Intellect, they also recently published OpenDiLoCo, an open-source implementation and scaling of DeepMind’s Distributed Low-Communication (DiLoCo) method, enabling globally distributed AI model training. They trained a model across 3 different countries with 90-95% compute utilization and scaled it to 3x the size of the original work, proving its effectiveness for billion-parameter models. They presented their work at one of the largest AI conferences and saw the original DeepMind DiLoCo authors singing their praises.
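
To make the DiLoCo idea more concrete, here’s a minimal sketch of the training pattern OpenDiLoCo implements: each worker takes many cheap local optimizer steps and only synchronizes a compact “pseudo-gradient” every H steps. The toy quadratic objective, plain SGD inner loop, simple-momentum outer step, and all hyperparameters below are our own illustrative assumptions, not Prime Intellect’s code (the published DiLoCo method uses AdamW for the inner steps and Nesterov momentum for the outer step).

```python
# Sketch of DiLoCo-style low-communication training: H local steps per worker,
# then one synchronization round that averages "pseudo-gradients".
import numpy as np

rng = np.random.default_rng(0)
DIM, WORKERS, OUTER_ROUNDS, H = 10, 4, 50, 20
INNER_LR, OUTER_LR, OUTER_MOMENTUM = 0.05, 0.7, 0.9

# Each worker holds its own data shard; a random target vector stands in for it.
shards = [rng.normal(size=DIM) for _ in range(WORKERS)]

def local_grad(weights, shard):
    """Gradient of 0.5 * ||weights - shard||^2, a stand-in for a real loss."""
    return weights - shard

global_weights = np.zeros(DIM)
outer_velocity = np.zeros(DIM)

for _ in range(OUTER_ROUNDS):
    pseudo_grads = []
    for shard in shards:
        local = global_weights.copy()
        for _ in range(H):                      # H cheap local steps, zero comms
            local -= INNER_LR * local_grad(local, shard)
        pseudo_grads.append(global_weights - local)

    # One communication round per H steps: average the pseudo-gradients and
    # apply an outer momentum step.
    avg_delta = np.mean(pseudo_grads, axis=0)
    outer_velocity = OUTER_MOMENTUM * outer_velocity + avg_delta
    global_weights -= OUTER_LR * outer_velocity

# The shared model should approach the consensus optimum (mean of all shards).
print("distance to consensus optimum:",
      round(float(np.linalg.norm(global_weights - np.mean(shards, axis=0))), 4))
```

The detail worth noticing is the communication pattern: workers exchange one vector per round instead of gradients every step, which is what makes training across clusters in different countries tolerable.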

Prime Intellect’s value prop is far more clear and straightforward than a lot of what we see in crypto x AI. They’ve effectively set out to democratize AI development at scale and are building a platform to simplify the process of training state-of-the-art models through distributed training across clusters. This platform, if successful, will tackle key bottlenecks in large-scale AI development, not least of which is the complexity of training huge models across clusters. Those contributing compute can earn ownership stakes in these models – anything from advancing language models to accelerating scientific breakthroughs. Although we’re slightly biased, it’s obvious to us there’s no other team like Prime Intellect in this space. It will be worth following how Prime Intellect navigates some of the recent developments in distributed training from Nous Research or more generally, the improved capabilities from consumer-grade hardware products being offered by Exo Labs.

Bittensor

Bittensor is one of the most recognized DeAI protocols in the crypto community, largely thanks to its first-mover advantage and its token acting as a sort of meme for AI. Bittensor is a system of many moving parts (or subnets) that wants to bring nearly every aspect of artificial intelligence curation, training, and transmission on-chain. The Bittensor protocol is essentially dedicated to making AI a more decentralized ecosystem through its emphasis on community collaboration and the allocation of work to its subnets. There are 30+ subnets on Bittensor, though the team has recently expanded the limit to 45.

These subnets are compartmentalized and each handle one aspect of the broader AI stack; there are subnets dedicated to image generation, chatbots, training, price data, and everything in-between. Bittensor’s tech stack is a bit complex, but their approach reminds us of Cosmos and its many orbiting application-specific chains within the Cosmos Hub. In this analogy, Bittensor is Cosmos and its subnets are the app-chains; all of these are bound by TAO, Bittensor’s native utility token that’s earned via contributions to the network.

The Bittensor network consists of miners and validators. Miners are the ones that run the AI models posted to the Bittensor network, engaging in competition to see who can produce the best outputs, receiving TAO tokens in return. Validators are tasked with reviewing model outputs from miners, verifying their results, and reaching consensus on these rankings over time. TAO’s tokenomics are structurally similar to Bitcoin’s, with just 21 million tokens set to ever be mined within the network.
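
For readers unfamiliar with the Bitcoin comparison, here’s a hedged sketch of how a hard cap emerges from a halving schedule. The block reward and halving interval below are Bitcoin’s parameters rather than TAO’s exact schedule; the takeaway is that emissions decay geometrically toward a fixed cap regardless of how much demand the network sees, which is relevant to the criticism discussed a few paragraphs down.

```python
# Bitcoin-style halving schedule: a geometric series of block rewards sums
# to a hard supply cap, independent of any demand for the network.
INITIAL_REWARD = 50.0          # tokens per block in the first era
HALVING_INTERVAL = 210_000     # blocks per era (Bitcoin's value, assumed here)

total_supply, reward, era = 0.0, INITIAL_REWARD, 0
while reward > 1e-9:           # emissions eventually round to zero
    total_supply += reward * HALVING_INTERVAL
    reward /= 2
    era += 1

# Geometric series: 50 * 210,000 * (1 + 1/2 + 1/4 + ...) -> 21,000,000
print(f"halving eras: {era}, asymptotic supply: ~{total_supply:,.0f} tokens")
```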

A Galaxy report on DeAI noted: “Miners can generate those outputs however they want. For example, it is entirely possible in a future scenario that a Bittensor miner could have previously trained models on Gensyn that they use to earn TAO emissions.”

The paradox of Bittensor is that while their goals are extremely ambitious, there’s growing skepticism about the model. They cite the usage of a mixture-of-experts (MoE) machine learning approach that splits the work of a model into distinct parts rather than treating it as one whole. MoE is still relatively new, and Bittensor’s subnet model requires each task to be separated from the network’s core functioning, whereas other major LLMs are more cohesive and usually tied to something like a chatbot.

As it stands, Bittensor’s model is not actual, true MoE – it’s just a fancy way of saying their subnets are unique and manage different parts of the Bittensor ecosystem. Bittensor claims to have utilized MoE to “compound” the performance of their various models, though this is far from how machine learning works. It’s not technically feasible to do something like this as models can’t stack and multiply their capabilities, though this doesn’t mean MoE isn’t worthwhile. This report is a good primer on MoE papers though it’s about a year old now.

Bittensor’s best quality today is the community and its ambitious pursuit of building the entire AI stack on-chain; their main issue (beyond product) is a lack of sustainability in TAO emissions, as rewards are given out without any actual progress to show for it. Other networks like Morpheus only reward miners based on demand generated on the platform, while Bittensor does the opposite: a subnet that sees no demand still earns token rewards. Until this fundamental flaw is fixed, Bittensor remains an intriguing thought experiment, but the path to commercializing sustainably remains unclear.

Gensyn

Building a decentralized training and compute platform, Gensyn wants to reinvent the way models are taken from concept to finished product.

Their litepaper describes a landscape in which the computational requirements of new LLMs double every three months. Today, there are more models than ever, with more companies offering compute of all kinds. Gensyn claims the current solutions are either a) too expensive, b) too oligopolistic, or c) too limited in technical scale. In order to efficiently and effectively produce an LLM, there needs to be a solution that can be verified on-chain in a scalable way.

Machine learning is “inherently state dependent, requiring new methods for both parallelisation and verification,” resulting in a situation where current solutions are only capable of doing very simple tasks with their aggregated compute – Gensyn cites 3D rendering as an example. They want to create a “protocol which trustlessly connects and verifies off-chain deep learning work in a cost efficient way,” but there are some major roadblocks in the way.

“Central to this problem is the state dependency of deep learning models; that is, each subsequent layer in a deep learning model takes as an input the output of the previous layer. Therefore, to validate work has been completed at a specific point, all work up to and including that point must be performed.”

A useful analogy can be found in the difference between fraud proofs and zero-knowledge proofs in crypto. ZK proofs are revolutionary because they vastly reduce the work required to prove something was done, at the cost of some very complex mathematics and cryptography. Gensyn tries to avoid the scenario where work must be constantly duplicated in order to reach agreement, which is how they hope to be faster, cheaper, and more scalable.

Fraud proofs need to re-execute specific parts of a blockchain to prove a transaction was malicious. What Gensyn describes is somewhat similar: machine learning outputs can’t be efficiently verified without re-executing a large amount of redundant work. Other problems highlighted include parallelization, a lack of privacy-centric design, and the dynamics of building out a new marketplace.
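
The state-dependency problem is easy to see in code. In the hedged sketch below, a solver commits to intermediate activations (“checkpoints”) so a verifier can re-execute only a disputed slice instead of the entire forward pass; the toy layers and checkpoint-per-layer scheme are our own simplification, not Gensyn’s actual proof system.

```python
# Why verifying layer k naively requires redoing all prior work, and how
# committed checkpoints let a verifier re-run only a disputed slice.
import numpy as np

rng = np.random.default_rng(1)
LAYERS = 8
weights = [rng.normal(size=(16, 16)) for _ in range(LAYERS)]

def run_layers(start, end, checkpoints):
    """Re-execute layers [start, end) from a committed activation checkpoint."""
    h = checkpoints[start]
    for i in range(start, end):
        h = np.tanh(h @ weights[i])   # each layer consumes the previous layer's output
    return h

x = rng.normal(size=16)

# Solver runs the full pass and commits to the activation after every layer.
checkpoints = {0: x}
for i in range(LAYERS):
    checkpoints[i + 1] = np.tanh(checkpoints[i] @ weights[i])

# Naive verification of the claim about layer 6: redo all prior work (layers 0-5).
naive = run_layers(0, 6, {0: x})

# Checkpointed verification: re-run only the disputed slice from a committed state.
spot_check = run_layers(5, 6, checkpoints)

print(np.allclose(naive, checkpoints[6]), np.allclose(spot_check, checkpoints[6]))
```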

Their solution ultimately takes the shape of an L1 protocol that directly rewards supply-side participants for their compute contributions to the network, without the administrative oversight or hands-on management you see evidence of in Bittensor’s Discord server.

Gensyn claims to achieve a verification solution that’s over 1,350% more efficient, utilizing probabilistic proof-of-learning, a graph-based pinpoint protocol, and Truebit-style incentive games, combined with four main network participants: submitters, solvers, verifiers, and whistleblowers. These four participants come together to collectively propose tasks that require ML work, perform training and proof generation, verify and replicate portions of proofs, and challenge potentially inaccurate outputs.

Building this platform on crypto rails is meant to eliminate centralized counterparties and reduce the barrier to entry for new participants. Ideally, through the use of blockchain technology, Gensyn will offer a platform that’s free of bias and low-cost for builders. The protocol’s workflow can be visualized through this very simple flow chart:

For Gensyn to actually succeed, there needs to be efficient coordination between the previously specified participants. They want to be the first layer-one blockchain for trustless deep learning computation – work performed off-chain, verified on-chain, and coordinated by an efficient marketplace between submitters and solvers.
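
As a rough illustration of how those four roles interact, here’s a stylized simulation of the flow: solvers commit to checkpointed results, verifiers re-execute a random subset, and mismatches get escalated for slashing. The spot-check rate, the number of checkpoints, and the honesty pattern are made-up parameters, not Gensyn’s actual incentive game.

```python
# Stylized submitter/solver/verifier/whistleblower flow with probabilistic
# spot checks; parameters are illustrative, not Gensyn's.
import random
from dataclasses import dataclass

random.seed(0)
SPOT_CHECK_RATE = 0.25               # fraction of checkpoints a verifier re-runs

@dataclass
class Task:
    task_id: int
    checkpoints: list                # solver's claimed intermediate results
    honest: bool                     # ground truth, hidden from the protocol

def solve(task_id: int, honest: bool) -> Task:
    """Solver performs the submitted work and commits to checkpointed results."""
    checkpoints = [f"state_{task_id}_{i}" for i in range(8)]
    if not honest:                   # a cheating solver fabricates one checkpoint
        checkpoints[5] = "garbage"
    return Task(task_id, checkpoints, honest)

def verify(task: Task) -> bool:
    """Verifier re-executes a random subset of checkpoints and flags mismatches."""
    for i in random.sample(range(8), int(8 * SPOT_CHECK_RATE)):
        if task.checkpoints[i] != f"state_{task.task_id}_{i}":   # honest re-execution
            return False             # whistleblower escalates; the solver is slashed
    return True                      # payment is released to the solver

tasks = [solve(i, honest=(i % 4 != 0)) for i in range(12)]
caught = sum(not verify(t) for t in tasks if not t.honest)
print(f"dishonest tasks: {sum(not t.honest for t in tasks)}, caught: {caught}")
```

With a low spot-check rate, cheating is only caught probabilistically, which is why the staking and whistleblower mechanics have to make the expected value of cheating negative.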

Prodia

The AI companies that are household names today raised money at a time when investor perception was quite different. Back then, the prevailing idea was that AI would replace every industry except the creative ones, because creativity was viewed as an innately human, taste-dependent ideal. Instead, progress from teams like Midjourney (text-to-image) and Runway (text-to-video) has opened a lot of eyes to the capabilities of generative art and creative platforms built on advanced ML. The quality of images and video has improved at a breathtaking pace, with recent Midjourney generations becoming increasingly difficult to discern from actual photos. Prodia is a platform that offers dynamic image generation services through its API. The network boasts over 10,000 GPUs, a total of 400 million images generated, and two-second generation times for its users.

Marketing itself to both solo developers and businesses, Prodia’s network of operators contributes over 12,000 hours of compute daily for almost any type of image generation needed. Prodia has been able to build out an infrastructure in an attempt to maximize earnings for its operators, letting them earn up to 10-15x more per month when benchmarked against NiceHash.

The main draw of Prodia for consumers is 50-90% cheaper inference prices – if you need to generate tens of thousands of images, Prodia offers competitive prices across Stable Diffusion’s 1.5, XL, and 3.0 models. There isn’t too much worth covering on Prodia, other than to point out they’re a revenue-generating business built on DePIN rails. Midjourney continues to be the gold standard for image generation, and with its monthly fee subscription model it will be worth watching how Prodia looks to compete. For the most part, AI-generated imagery is becoming increasingly indistinguishable from actual images and should present an opportunity for Prodia to lean into different types of markets. Their docs are a good source of information if you’re trying to learn more about how they charge for services or use the product for your own business. 

Ritual

Ritual is one more software-focused decentralized AI protocol that wants to combine the best of both blockchain and AI, describing itself as a sovereign execution layer for decentralized AI. Their genesis stemmed from structural issues across the traditional AI landscape: oligopolistic organizations, high compute costs, centralized APIs at large AI labs, and a lack of verifiability of model outputs and structures.

Their solution is Ritual Infernet, the first iteration of the network that brings AI to on-chain apps through “a modular suite of execution layers” that lets anything on a blockchain utilize Ritual as a coprocessor. The Infernet architecture is a bit complex, using DA layers, routers, sets of nodes and storage components. 

This isn’t easy to grasp from the graphic alone, but the TLDR is that Ritual enables blockchains to more easily connect with AI through smart contract requests, utilization of off-chain computation, on-chain verification of model outputs, and seamless integration into dApps. If there’s a general trend of more data and users moving on-chain, there’s a world where something like Ritual is able to facilitate this shift. There’s probably some overlap here with the open-world discourse many others have described in more detail – specifically teams like Parallel or Wayfinder.

Infernet’s model of plug-and-play infrastructure is a nice advantage as they can opt in or out of existing and future tech without breaking the system. The platform is accessible today through the Infernet SDK and the team has partnered with another familiar name in Nillion and its MPC tech.

We always like to see teams actually ideate on how their technology can be used and Ritual does so with a list of potential startups that can be built using their tech stack:

  • Multi-agent Transaction Frameworks
  • ML-enabled Lending
  • ML-enabled AMMs
  • Memecoin + ML-enabled tech

The ultimate end goal for Ritual is to operate as a base layer for Decentralized AI, and assuming they’re able to provide infrastructure that’s both useful and simple to integrate, they can act as a funnel for moving some AI on-chain.

Grass

The last DeAI-related name we’ll discuss here is Grass, a protocol prioritizing data.

Grass hopes to enable individuals to take control of their data and contribute to the next generation of AI models. Participants allow Grass to monetize their unused bandwidth which is sold to AI labs for model training or other research purposes. In order to continue scaling LLMs beyond today’s capabilities, there’s a massive need for more unique data. Harvesting the existing internet is one thing, but AI labs are already pushing the limits of what’s possible, exploring alternatives like synthetic data and other solutions to get access to more unique data.

Grass’ solution is simple: users download the Grass extension, create an account, and run it in the background while they go about their business online. Thus far it’s brought on over two million contributors to the network. Grass was an early-mover in this category with a simple product, launching prior to the heightened significance of data for LLMs.

The architecture is, again, overly complex in our view – Grass wants to function as an L2 data rollup that will serve as the base layer for decentralized web scraping, data collection, and model fine-tuning. It’s managed through a network of validators and nodes (users running Grass on their devices), which communicate with web servers and routers, underpinned by a ZK processor for verification prior to training.

The use of Grass’ distributed node and validator infrastructure makes it resistant to rate-limiting on a single IP address. Without the DePIN model, web scraping at this scale would be constrained because you couldn’t run all of this on one device, limiting the network’s scalability. In theory, by splitting up the work, Grass can send retrieval requests across the entire network, achieving some level of parallel processing.

Grass primarily manages two types of traffic: partially-encrypted traffic (PET) and fully-encrypted traffic (FET). The use of PET allows Grass to re-encrypt previously encrypted data for better quality, integrity, and bandwidth confirmations. FET lets Grass ensure more privacy for its users at the cost of some performance. The nodes are in charge of relaying this traffic back and forth between the validators, routers, and web servers — all of this happens in the background while users earn Grass points with no additional friction on their end.

It’s a model (a Chrome extension) that raises at least a few questions, namely:

  • How defensible is a Chrome extension, really?
  • How does Grass continue its rate of data collection and scale as users join the network?
  • Is it realistic to actually train this way or is the better approach becoming a crypto-version of Honey?
  • If this data is just mined while users passively browse the internet on their devices, how valuable is it compared to more robust data that’s already been fed to every open or closed source LLM?

The Decentralized AI space is still in an extremely tough position, as most machine learning developers are more inclined to simply build with traditional tech rails. The opportunity cost of switching to or experimenting with crypto-based platforms is still high, but credible teams will be at a huge advantage.

We believe there is a massive role for crypto to play as these market dynamics play out – the leaders of the largest companies in the world have an insatiable appetite for gaining any advantage they can in this space. Many of them openly admit they view it as existential. By no means does this guarantee we’ll see massive companies built in crypto x AI, but the ground is fertile, and for teams who uniquely understand both disciplines, the prize is sizable.

Data Capture and Management

Data is a broad term.

Everything we do these days creates some form of data. Healthcare companies specialize in the collection, maintenance, and analysis of patient-specific data. Social media platforms want user data that unlocks insights around platform behaviors and unique interactions. Marketing agencies want data pertinent to their ad campaigns and how well consumers are converting. There’s more data being collected and uploaded to the internet than ever before, and there’s no reason to expect this trend to slow.

In the context of DePIN, it’s easy to see the connection between data and crypto: blockchains are one of the best sources of data as they’re immutable and persist as long as a network remains properly validated. This is also a fairly common pitch from founders – the collection of data that will be monetized because it’s unique or special for some reason. Oftentimes though, there’s a misunderstanding of how traditional markets for various types of data actually function. The world of data DePIN is still underdeveloped, but there are four core categories that are particularly interesting in our view: content delivery networks, mapping, positioning, and climate/weather-specific data. The degree to which specialized hardware is involved varies here, but in our view, the most defensible DePIN networks are those that tackle the difficult physical world challenges. 

The previous section danced around the importance of data; this section explores the intricacies of data collection, the market for different forms of data, and how valuable it actually is in practice. A simple explanation from Messari’s 2023 State of DePIN report asserted that data is most valuable when:

  • There are many buyers of it
  • These buyers make lots of money from it
  • Better data is the limiting factor to them making more money

Interestingly enough, we often find that crypto teams have a very loose understanding of how valuable the types of data they collect really are. Understanding from just the 30,000 foot level can lead teams down dead-end paths if they don’t acutely appreciate who the buyers are and what moves the needle for them. There are all types of anecdotes about how hedge funds pay for thermal activity data to understand spikes in activity or satellite imagery data to understand how quickly developing countries are growing. But these are extremely localized, specific and actionable time-series data sets, and without an appreciation for who the end customer is, too often we see a collect-data-for-the-sake-of-collecting-data approach. 

So what about the demand side for this? Who is actually purchasing this data and why?

  • Buyers of mapping & geospatial data include: navigation and ADAS providers, municipalities and private firms involved in urban development, transportation and logistics, real estate and insurance, hedge funds and PE, agriculture, defense, AR and VR
  • Buyers of positioning data include: telecom, retail, agriculture, surveyors and construction, AR and VR, drones and robotics, aviation, navigation and ADAS, automotive, space agencies
  • Buyers of climate data include: insurance and loan underwriters, energy and utilities, governments and nonprofits, construction and real estate, agriculture, aviation

Mapping

We’re all familiar with Google Earth and Apple Maps, as well as their methods of collecting data: cars driving across the globe for more than twelve hours a day to take snapshots of every street, sidewalk, and home out there. Google Street View began in 2007 as an idea to capture the world and import this imagery into an app to help everyone navigate it better. As of 2022, Google had captured over 220 billion Street View images, while Apple Maps’ “Look Around” has expanded into aerial and satellite imagery.

Hivemapper

We’re also all familiar with the DePIN equivalent of this, Hivemapper, founded in 2022 to create a decentralized approach to location mapping. The team saw the growing demand for mapping services thanks to over 1.5 billion vehicles that now require mapping software, business demand for mapping APIs, and the billions of users that use Google or Apple’s map functionality every single day.

The thesis is quite simple: it’s expensive and time-consuming to consistently update Google maps. The data also becomes stale quickly because Google only periodically updates previously mapped streets. Hivemapper’s approach is to sell dash cams to network participants who would otherwise be driving anyway, and collect imagery in return for tokens.

One of the reasons Hivemapper stands apart from other, more centralized services (beyond the frequency of data refresh) is that it’s often cumbersome for Google or Apple to map less densely populated areas themselves. Hivemapper is less focused on providing the most comprehensive imagery of high-traffic locations than it is on mapping the entire globe. It’s also attempting to differentiate on the pace and freshness of its data: the average Google Maps or satellite image is one to three years old, which seems quite stale in 2024. As of today, Hivemapper has mapped over 14.71 million unique kilometers and over 290 million total kilometers.

Ok but then the question becomes, who might want this data and is it valuable?

We’ve noted that hedge funds are always looking for these types of data edges, but the demand-side can extend to local governments, insurance companies, and logistics companies. If Hivemapper has better coverage in parts of Middle America, it’s likely they could sell this to logistics companies trying to get up-to-date information as to what roads are better for their shipping routes or what areas might have higher or lower traffic than previous imagery indicates. Insurance companies might want to use detailed, current imagery to assess properties, evaluate growth in an area, or examine whether or not a location needs reassessment. There are obviously other avenues to monetize but this is the general approach many of these data collection networks are taking.

Hivemapper’s approach is unique because they also allow for users to report changes to roads, infrastructure, or other anomalies that might need tagging. Users can participate in network governance by voting on Map Improvement Proposals (MIPs) and use their influence to gauge what Hivemapper might need to prioritize next. It’s probably the next most well-known DePIN project (behind Helium) and while the pace of mapping has been impressive, it remains to be seen how the demand side crystallizes.

Spexi 

Those who spend time in DePIN will know Spexi, which uses drones equipped with high-resolution cameras to gather the world’s highest-quality aerial imagery. Spexi positions itself as the first “fly-to-earn” (F2E) network that wants to make it easier for organizations to access high-quality imagery to “prepare for disasters, enable smart cities, remotely inspect infrastructure, monitor natural resources”.

Satellites, planes, and drones are three types of technology capable of capturing aerial imagery. This is an example of an area where we think it’s easy to get lost in the collection phase – capturing a massive amount of data that’s ultimately not high-resolution enough for your demand side is a real risk. Spexi highlights in their docs that the “best available” commercial satellite imagery is only collected at a resolution of 30 cm per pixel – here’s an image to show just how much of a difference the image quality makes:

The opportunity for Spexi to prioritize higher-coverage, higher-resolution imagery is unique. Consumer-grade drones are becoming increasingly inexpensive, which, as you may expect, is leading to explosive growth in the number of people flying them. Additionally, Spexi’s docs claim that just 1% of the Earth has been captured by drones, mostly due to a lack of standards and previously expensive hardware.

Spexi’s business model is quite simple. They established a standardized unit of account called Spexigons, spatial hexagons that each cover roughly 22 acres of the Earth’s surface. In order to ensure proper management of these drones, “each Spexigon contains flight plan information that the drones use (when paired with the Spexi mobile app) to ensure they fly at the correct height, speed, and location to capture the best imagery possible.”

By combining this more uniform model of data collection standards with an incentive model, Spexi hopes to match the coverage of satellites while maintaining higher resolution in more in-demand areas. Their fly-to-earn model incentivizes brief, automated flight missions to capture data as specified on the Spexigon grid. The platform is still in testnet, but they want to ensure long-term sustainability with a native token through utilities like “reserving Spexigons for preferential capture via a staking mechanism.”
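
Some quick back-of-envelope math on what that grid implies, using the 22-acre figure and the 1% drone-coverage claim from Spexi’s docs. The land-area constant is the standard ~148.9 million km² figure; the rest is rough illustrative arithmetic rather than Spexi’s own numbers.

```python
# Rough sizing of the Spexigon grid over Earth's land area.
ACRES_PER_KM2 = 247.105
EARTH_LAND_KM2 = 148_900_000
SPEXIGON_ACRES = 22

land_acres = EARTH_LAND_KM2 * ACRES_PER_KM2
spexigons_for_land = land_acres / SPEXIGON_ACRES
print(f"Spexigons to tile Earth's land area: ~{spexigons_for_land:,.0f}")
print(f"Spexigons captured so far at the cited 1% coverage: ~{spexigons_for_land * 0.01:,.0f}")
```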

The platform plans to fully vet drone operators, recommending at least ten drone flights prior to undergoing Spexi-driven capture flights. The FAA requires commercial drone operators to become certified remote pilots, and classes and courses are regularly offered across the US, including at many major universities. Drones might not be viewed as much more than hobbyist toys today, but the industry is growing, and their utility will likely extend into areas requiring frequent, detailed imagery.

Positioning

Geospatial positioning and reference data markets are less widely understood by most of the crypto market. Advancements in technology, increased adoption, and the rising importance of location-based services have led to significant growth in this category over the past decade. All in all, the global geospatial analytics market is likely to top $100bn in the next few years, and with the adoption of autonomous vehicles, precision agriculture, space exploration, smart city infrastructure, and a handful of other important sectors (defense and security most notable), that importance is only likely to grow faster.

  • Autonomous vehicles – rely heavily on precise geospatial positioning data for navigation & safety
  • Agriculture – precision agriculture uses geospatial data to optimize farming practices, increase yield and reduce waste
  • Space exploration – satellite deployments are becoming cheaper, leading to far more low-earth orbit satellites susceptible to collision
  • Smart cities – urban planning, infrastructure development and traffic management will rely heavily on this type of positioning data
  • Defense & security – it goes without saying that surveillance, reconnaissance and targeting will drive ongoing investment in this type of positioning data capture

Today there are satellite-based systems for collection (GNSS satellites), ground-based systems (ground stations with fixed reference networks), aerial & drone-based systems (drones equipped with cameras and LiDAR sensors) and crowdsourced data (mostly mobile & IoT devices). As you might expect, crypto can uniquely enable a hybrid approach to these different types of existing systems.

Onocoy, Geodnet, and Foam are the most recognizable names in this category today. Geospatial data is “time-based data that is related to a specific location on the Earth’s surface” that combines location information, attribute information, and temporal information. This includes specific coordinates, characteristics of an object, and the timespan in which these conditions exist, though geospatial data can also be characterized as static or dynamic. Geospatial analytics are used to add relevant data to these collections of information, while geospatial information systems involve the “physical mapping of data within a visual representation.” Just as we know that LLMs improve as more context is added, the same is true for geospatial analytics.

Geodnet

Geodnet is a DePIN protocol attempting to build the world’s largest real-time kinematic (RTK) network to achieve a 100x improvement over GPS technology. The platform uses sensor tech in devices like cameras, LiDAR, and IMUs (inertial measurement units) to power its three core products: Geodnet satellite miners, RTK services, and the collection of raw GNSS data.

Before covering Geodnet specifically, a bit of context. LiDAR is a remote sensing technology used in autonomous vehicles, meteorological models, agricultural studies, and most other industries requiring precise analysis of physical environments. RTK is a positioning technique that uses carrier-phase measurements from fixed reference stations to correct errors in standard satellite positioning, achieving centimeter-level accuracy. It’s most commonly used in areas like land surveying, hydrographic surveying, and unmanned aerial vehicle navigation.

Geodnet’s platform uses reference stations called satellite miners to collect signals from Global Navigation Satellite Systems (GNSS). These miners generate RTK correction data and send it through the Geodnet network to rovers – devices with a GNSS receiver (cars, drones, etc.). Network participants must purchase a miner, connect it, and manage the upload of 20-40 gigabytes of data each month. More than eleven distributors sell Geodnet miners, with prices ranging from $500 to $700. The network’s coverage is fairly expansive, currently concentrated in Europe, North America, and Australia.

Geodnet has been able to attract thousands of “triple-frequency, full-constellation GNSS reference stations” since 2022, rivaling the speeds of traditional corporations that would need to install all of this infrastructure themselves. 

DePINscan shows average daily earnings per miner of roughly $4.30, which implies a payback period of roughly four to five months on a $500-$700 miner. There are over 8,000 miners in the network, spread across 136 countries and more than 3,700 cities. Miners are rewarded with GEOD tokens, which are eligible for token burns driven by data collectors and providers. The sustainability of some of these early DePIN token models remains unproven, but we’re seeing more and more projects take a directionally correct approach to competing with incumbents.
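
The payback math on those figures, for reference. Real earnings vary with location, data quality, and the GEOD price, so treat this as rough arithmetic rather than a guaranteed return:

```python
# Break-even math on the cited hardware cost and average daily earnings.
DAILY_EARNINGS_USD = 4.30            # DePINscan's network-wide average, cited above

for hardware_cost in (500, 700):
    days = hardware_cost / DAILY_EARNINGS_USD
    print(f"${hardware_cost} miner: ~{days:.0f} days (~{days / 30:.1f} months) to break even")
# -> ~116 days (~3.9 months) and ~163 days (~5.4 months)
```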

Onocoy

Onocoy is actually quite similar to Geodnet – they’re also building out RTK infrastructure for the mass adoption of high-precision positioning services. They specifically cite issues like high costs for reference station infrastructure, regional fragmentation of services, lack of business models applicable to mass markets, and other constraints that have limited the spread of RTK infrastructure as needed.

Onocoy’s approach involves separating reference station operations from the provision of correction services; this allows for a significant reduction in CapEx for reference station deployment while opening the door to deploy anywhere in the world. Each device is equipped with a wallet that communicates with Onocoy validators. Once the precise data is acquired, validators determine the quality and availability of submitted data to construct reward ratios (in ONO tokens), incentivizing providers to monitor their outputs more closely. The two main services provided are permanent and nearest, with each targeting different data collection methods (scientific versus larger-scale).

Foam

A final example here is Foam, which provides the tools and infrastructure to enable a crowdsourced map and decentralized location services – all without the use of satellites. Foam’s approach involves terrestrial radios that make up a fault-tolerant system that anyone can participate in. Where Onocoy and Geodnet use more sophisticated infrastructure to get extremely accurate data (down to the centimeter), Foam is interested in onboarding as many participants as possible to build a resilient layer without a single point-of-failure.

Built on Ethereum and governed by the FOAM token, the network consists of two major services: Foam Map and Foam Location. The former is a “community-verified” registry of crowdsourced places – this could be anything from a new restaurant occupying a previously uninhabited building to a residential apartment building with limited information about it online. Foam lets users contribute their findings – called points of interest (POIs) – to the network, as the platform’s goal is more personalized and entertaining maps.

The model of verification is pretty straightforward: challengers post a challenge if they believe an attestation to be incorrect, which is then debated by the Foam community. From there, data is organized and adjusted through the spending of FOAM tokens.

Traditional mapping services like Google Earth or Apple Maps can take years to update their imagery and information. With Foam, it’s easy to accommodate the changing physical environment and make more frequent adjustments. To join the network, all it requires is a Chromium-supported browser, a web3 wallet, FOAM tokens, and ETH to pay for transaction fees – users are then able to start mapping as they see fit.

Foam’s Location product is the component that deals with physical infrastructure, aka the Foam radio network. Service providers set up and maintain radios, which are then used to offer location services through time synchronization, an alternative to the more centralized GPS system. Foam saw three major problems with existing spatial protocols: location encoding, user experience, and verification. There’s a lack of established standards for embedding locations, physical addresses, or coordinates in smart contracts; there’s also no current methodology for actually verifying this data on-chain, let alone in a scalable way. “According to the United Nations, 70% of the world is unaddressed, including more than half of the world’s sprawling urban developments.”

There have been other attempts to create alternative addressing systems to “increase human memorability, verifiability, and machine readability”, with the best attempts being What3words and Open Location Code – both of these systems ultimately failed as it was difficult to introduce sustainable economic incentives that fostered growth of the initiative.

Foam also saw issues in location verification, mainly due to a lack of backup to GPS data, as the system relies on an extremely centralized set of just 31 satellites that are being stretched beyond their limits. The Global Navigation Satellite System is responsible for managing synchronized timestamps for the transmission of electricity to the grid, facilitating location data that corresponds to transactions, and timestamping automated trades on stock exchanges – all of this on top of its work as a global navigation system for billions of people every day.

With regard to verification, there is no encryption with the civil GPS system and no proof-of-origin to prevent fraud. Foam’s solution to these problems comes with crypto-spatial coordinates (CSCs), the first open location standard on Ethereum. These CSCs are Ethereum smart contract addresses with corresponding physical addresses, with the geohash standard used for simplicity. This CSC standard can be used to make a claim or reference to any location in the physical world, allowing for smart contract activities to “take place on a spatial dimension.”
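
To make the CSC idea concrete, here’s a minimal geohash encoder showing how a latitude/longitude pair collapses into the compact string a crypto-spatial coordinate pairs with a contract address. This is the standard geohash algorithm rather than Foam’s exact CSC implementation, and the coordinates are just an illustrative point in Brooklyn.

```python
# Standard geohash encoding: interleave longitude/latitude bisection bits,
# emitting one base32 character per 5 bits.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat: float, lon: float, precision: int = 9) -> str:
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    use_lon, bits, bit_count, chars = True, 0, 0, []
    while len(chars) < precision:
        rng, value = (lon_range, lon) if use_lon else (lat_range, lat)
        mid = (rng[0] + rng[1]) / 2
        bits <<= 1
        if value >= mid:
            bits |= 1
            rng[0] = mid                  # keep the upper half of the interval
        else:
            rng[1] = mid                  # keep the lower half of the interval
        use_lon = not use_lon
        bit_count += 1
        if bit_count == 5:                # every 5 bits become one base32 character
            chars.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(chars)

# Illustrative point in Williamsburg, Brooklyn; the output starts with "dr5",
# the geohash cell covering New York City.
print(geohash(40.7081, -73.9571))
```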

Foam achieves this through its Zone Anchors (ZAs), remote-controlled radio nodes that are strategically placed to transceive LoRa packets between locations, converting this information into blockchain-receivable data. Foam’s Location protocol requires four ZAs to sync and form a zone, resulting in the creation of a Foam zone that’s capable of location tracking on an indefinite basis.

In the past three months, Foam has announced a few major updates, including zone expansion and the capacity to broadcast Ethereum transactions through radio signals. As Foam has grown, they’ve expanded into familiar crypto territory in Williamsburg.

Climate & Weather

Meteorology and crypto don’t exactly seem like a match made in heaven, but that’s exactly why we find it compelling.

The climate and weather DePIN sector is pretty sparse today but there are a handful of companies like dClimate, PlanetWatch, and WeatherXM. We like to spend time in areas that most people don’t seem to care about yet and this certainly fits the bill. The core premise of “climate and weather” DePIN is twofold:

  1. If you accumulate enough weather data over time from an extensive distributed network, you can create more accurate predictive models, outcompeting the weather delivery system we have today.
  2. Distributed networks allow for much more localized and real-time information, which is particularly advantageous during periods of extreme weather.

Here’s a brief but informative explainer of how the weather forecasting system works today and how complex it really is.

Almost anything can be used to contribute to weather data, and we get this data from a variety of sources, including satellites, weather balloons, aircraft, weather stations, radar, ships, and lightning detection systems. The World Meteorological Organization (WMO) coordinates the absorption of all of this data and its distribution amongst national weather services, global climate centers, and private weather companies. That includes recognizable names like AccuWeather and The Weather Company, but also public agencies like NOAA and the ECMWF.

The raw observations themselves come from different types of collection networks, like the Automated Surface Observing System (ASOS), the Cooperative Observer Program (COOP), and mesonets. These range from simple or more niche data collection to broad networks used to examine high-resolution geographic data. ASOS stations are automated weather stations capable of collecting data on temperature, precipitation, cloud cover, wind speed, and more.

WeatherXM

WeatherXM is a community-powered weather network that makes the process of acquiring and distributing weather data simpler and more rewarding for all. Their network is led by participants that purchase and manage WeatherXM’s weather stations, with four distinct models.

Network participants deploy weather stations and earn tokens based on their quality of data (QoD), with WeatherXM using a unique QoD algorithm to ensure the data submitted is accurate and valuable. These operators collect data related to a host of different weather model elements including temperature, UV index information, rainfall and humidity.

Their explorer shows there are over 5,285 active weather stations deployed in large concentrations across Europe and North America, with the option to deploy WiFi, Helium/LoRaWAN, or 4G-LTE stations. The decision to choose one of the three distinct station setups is dependent on where you live and what type of infrastructure makes the most sense.

If you’re in a city, it’s probably best to manage a WiFi station as there’s more than likely a consistent power source and easier transmission opportunities due to the density. Helium/LoRaWAN stations are a better fit for rural areas without consistent WiFi or cellular network infrastructure. 4G-LTE stations are suitable for almost any location but require higher operational costs and more attention.

WeatherXM’s native token is distributed based on a revised reward system. It’s not entirely straightforward, but there’s a significant focus on a) frequency of contribution, b) quality of data, c) verification of data, and d) location of station. Providers should earn more WXM if they’re collecting consistent, high-quality data in a location that isn’t serviced as heavily as densely populated cities.
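
As an illustration of how those four factors might combine, here’s a hedged sketch of a pro-rata reward split. The weights, the multiplicative form, and the daily emission figure are placeholders we made up, not WeatherXM’s published reward algorithm.

```python
# Illustrative pro-rata split of a daily token pool based on uptime, data
# quality, verification status, and how under-served the station's area is.
def station_score(uptime: float, qod: float, verified: bool, cell_saturation: float) -> float:
    """All inputs in [0, 1]; cell_saturation = share of nearby demand already served."""
    location_bonus = 1.0 + (1.0 - cell_saturation)   # sparse areas earn up to 2x
    verification_factor = 1.0 if verified else 0.5   # unverified data is discounted
    return uptime * qod * verification_factor * location_bonus

daily_pool_wxm = 10_000.0   # hypothetical daily emission to be split pro rata
stations = {
    "rural_lora": station_score(uptime=0.97, qod=0.92, verified=True,  cell_saturation=0.1),
    "city_wifi":  station_score(uptime=0.99, qod=0.95, verified=True,  cell_saturation=0.9),
    "flaky_4g":   station_score(uptime=0.60, qod=0.70, verified=False, cell_saturation=0.5),
}
total = sum(stations.values())
for name, score in stations.items():
    print(f"{name}: {daily_pool_wxm * score / total:,.0f} WXM")
```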

dClimate

Taking a different approach from WeatherXM, dClimate is focused on democratizing the vast amount of weather data already in existence, turning it into more tangible outputs and allowing more individual access to it. The platform consists of three major products: dClimate’s data marketplace, CYCLOPS’ monitoring, and Aegis’ climate risk management.

The dClimate whitepaper is an excellent resource on the existing problems plaguing the traditional weather data system. Raw climate data is traditionally produced by academic research projects and government partnerships. Unfortunately this leads to rather siloed access, something dClimate wants to disrupt by expanding the terrestrial weather station network to get better coverage in more important parts of the globe. One of the main issues preventing this is that there hasn’t been a feasible way for hobbyists to set up their own weather monitoring systems and feed data into aggregation platforms. These data services are only compatible with established pipelines and cut out access for the average individual.

By focusing on innovation in two key areas – cleaned data and natural catastrophe simulations – dClimate can create a wedge into a smaller part of the weather data stack. Their solution is a single, decentralized ingestion and distribution system that can be implemented for all climate data, where users submit data and let dClimate do the work to store and distribute this data. The idea is that through a more efficient collection of this hyper-localized data, dClimate can force incumbents and their walled systems to shift towards more decentralized data collection methods.

The whitepaper defines two core participants in the dClimate ecosystem – data consumers and providers. Providers are incentivized to use dClimate through access to a unique sales channel. If you are one of the previously referenced hobbyists, you have a slim chance at submitting data to a meaningful aggregation pipeline. With dClimate, you’re immediately able to access a dedicated funnel of consumers looking for unique data. Another benefit of dClimate’s model is it allows providers to focus less on distribution as they’re already connected with potential buyers.

Consumers are just the demand side – weather companies looking for better weather forecasting, farmers looking for crop models, NGOs looking for data. dClimate is facilitating activity, aligning consumers and providers in a transparent system.

Storage of dClimate’s data is maintained through IPFS, with providers registering on-chain via smart contract as an attestation of their identity and willingness to provide the data. Upon purchase, a consumer submits a stablecoin payment with an attached IPFS hash stating exactly what data they’d like to purchase from the provider, with a Chainlink oracle confirming the transaction entirely on-chain.
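
A stylized version of that flow is sketched below. Every function and field name is a placeholder for illustration – this is not dClimate’s actual contract interface – but it shows the three steps: provider attestation, a stablecoin payment referencing an IPFS hash, and an oracle report releasing escrow.

```python
# Toy marketplace mirroring the described flow: attest, pay into escrow
# against an IPFS hash, release on an oracle's delivery confirmation.
from dataclasses import dataclass, field

@dataclass
class Marketplace:
    attestations: dict = field(default_factory=dict)   # provider -> dataset IPFS hashes
    escrow: list = field(default_factory=list)

    def register_provider(self, provider: str, dataset_hash: str) -> None:
        """Provider attests on-chain that it will serve the dataset at this hash."""
        self.attestations.setdefault(provider, set()).add(dataset_hash)

    def purchase(self, consumer: str, provider: str, dataset_hash: str, usdc: float) -> None:
        """Consumer escrows a stablecoin payment tied to the exact IPFS hash."""
        assert dataset_hash in self.attestations.get(provider, set()), "unknown dataset"
        self.escrow.append((consumer, provider, dataset_hash, usdc))

    def oracle_confirm(self, dataset_hash: str, delivered: bool) -> None:
        """An oracle report releases escrow on delivery, refunds otherwise."""
        for consumer, provider, h, usdc in list(self.escrow):
            if h == dataset_hash:
                payee = provider if delivered else consumer
                print(f"release {usdc} USDC to {payee} for {h}")
                self.escrow.remove((consumer, provider, h, usdc))

m = Marketplace()
m.register_provider("station_operator.eth", "QmExampleHash")
m.purchase("reinsurer.eth", "station_operator.eth", "QmExampleHash", usdc=250.0)
m.oracle_confirm("QmExampleHash", delivered=True)
```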

The dClimate platform also includes a few other sub-products that assist with the marketplace and data collection processes. Cyclops is a digital measurement, reporting, and verification platform that puts all of the environmental data relevant to weather under one roof. Cyclops is designed as a tool for natural capital monitoring, with use cases extending from tracking deforestation and monitoring forest health to tracking carbon data and credits.

There’s also Aegis, which is a climate risk assessment tool that lets businesses examine detailed insights relevant to their weather operations. Aegis was designed to provide more detailed information for users on how climate change might affect their immediate environments. Users can input precise location coordinates and receive risk analyses.

These ancillary products seem to complement the main platform, though dClimate continues to face challenges around standardization, reliability and creating a wedge into legacy systems.

PlanetWatch

PlanetWatch is the last of the climate protocols we’ll briefly highlight, building out a platform for decentralized air quality monitoring to help combat air pollution. The correlation between economic development and air quality has long been studied and so long as we have economies advancing from developing to developed, we will have air pollution issues.

Air pollution alone causes over two million deaths a year in China, and as you might expect, Asia is the most affected region globally. Still, the issue persists everywhere despite pushes from the US and Europe to transition to cleaner energy sources.

The PlanetWatch whitepaper describes the platform as an innovative way of incentivizing individuals to deploy sensors and expand local air quality monitoring in an effort to enable “Smart-cities-as-a-Service.” PlanetWatch specifically set out to solve the lack of hyperlocal air quality sensors to better inform individuals exactly how clean (or polluted) the air they breathe is. Interestingly enough, this is another area that feels primed for some sort of collaboration between the biohacking community and crypto – as we all become much more acutely aware of personalized health, understanding and improving air quality will be big business.

Poor air quality kills over seven million people a year, with 9 out of 10 individuals globally exposed to air that fails to meet World Health Organization quality standards. Hundreds of billions of dollars are lost each year to the externalities of poor air quality – easily deployable sensors that can be placed conveniently to collect and record sequential air quality data are the first step.

More recently, PlanetWatch announced that the Ambient Foundation would be taking over operation of the network and implementing new tokenomics. It’s pretty unclear what this actually means but the idea is novel enough that it’s worth including here. 

Ultimately, data collection is such a large umbrella term that it doesn’t mean much on its own. Even the few specific areas we discussed here are just scratching the surface of this space, and each has its own monetization challenges. From our perspective, it can’t be overstated just how important it is to understand these business models in the traditional (non-DePIN) sense. There are huge advantages to building these types of networks using a distributed model, but we think there’s also a lot of nuance beyond just hoovering up as much data as possible.

Some key questions to take away from all of this:

  • Is there a world where this type of infrastructure competes with non-DePIN incumbents?
  • If so, which of these verticals is best positioned for growth over the next decade+?
  • Is there a market for more specific, localized climate data in the wake of global climate change?
  • How important are hardware advancements for each of these companies (cheaper drones, longer flight times, longer-reaching satellite comms, more specialized sensors)?
  • What are the emerging technologies outside of crypto that will enable some of these networks to thrive?

We also think there are some interesting adjacent companies that can be built in this space. Toward the end of this report we specifically call out Disaster Response Networks, which in some respects will ingest real-time weather data as an input using similar hardware.

Services

Candidly this is a bit of a catch-all for networks that don’t necessarily fit neatly anywhere else at the moment. There are a large number of services provided in our daily lives, though there’s usually a distinction between physical and virtual services. Incidentally, we saw some recent discourse around the capital-p P in DePIN. There’s certainly an argument that the more physical the network is, the more difficult it will be to reach escape velocity. The flipside is that those who do reach escape velocity will have much stronger moats compared to their virtual network counterparts. A common narrative is that software has zero marginal cost, but rarely do we talk about how this also affects competitive dynamics – building durable moats in an open-source software world becomes radically more difficult.

In the context of this report, a service is a protocol that’s designed to offer a more niche functionality while still existing under the general umbrella of DePIN. The protocols highlighted represent categories that might not be fully formed, ideas that seem slightly deranged, or some combination of the two. 

  • Dimo: general-purpose mobility platform connecting vehicles and drivers
  • PuffPaw: vape-to-earn using physical vapes to incentivize individuals to quit smoking
  • Heale: data for the logistics industry
  • Silencio: global network to measure noise pollution
  • Blackbird: enables users to network with restaurants and earn rewards
  • Shaga: DePIN for low-latency, high-performance cloud gaming

These are all unique business models, but their alignment comes from the usage of crypto incentives to drive real-world behaviors. In no particular order, let’s cover a couple of these here.

DIMO

Most should be familiar with Dimo, a platform powering connected car applications and services. Dimo believes that if technology is user-owned and open source, users should save money and receive better experiences from their apps and services. The Dimo protocol uses blockchain to establish a network of universal digital vehicle IDs, vehicle control, payments, and data transmission through the use of a few applications and devices designed to integrate into most types of vehicles. Drivers connect their cars and stream data in return for DIMO tokens, earning a share of the weekly DIMO issuance and earning greater rewards when app developers or data consumers pay more for their posted data.

The DIMO token is required for transacting on Dimo, where purchasers of data must pay for these services with DIMO tokens – a reasonable enough design. If hardware manufacturers want to make new devices that connect drivers to the Dimo network, these physical devices must be backed by DIMO tokens in order to properly integrate into the network.
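
A hedged sketch of that issuance mechanic: a fixed weekly DIMO budget split pro rata, with extra weight for vehicles whose data consumers actually paid for. The budget, boost factor, and example numbers are illustrative assumptions, not DIMO's published parameters.

```python
# Pro-rata weekly issuance with a demand-weighted boost (illustrative only).
WEEKLY_ISSUANCE = 1_000_000.0
DEMAND_BOOST = 2.0      # weight multiplier per dollar of paid data demand

vehicles = {
    # vehicle_id: (baseline connectivity weight, USD paid by data consumers)
    "vehicle_a": (1.0, 0.0),
    "vehicle_b": (1.0, 5.0),
    "vehicle_c": (0.5, 20.0),
}

weights = {vid: base + DEMAND_BOOST * paid for vid, (base, paid) in vehicles.items()}
total = sum(weights.values())
for vid, w in weights.items():
    print(f"{vid}: {WEEKLY_ISSUANCE * w / total:,.0f} DIMO this week")
```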

Dimo’s growth has been impressive, with over 104,000 unique vehicle IDs minted across ~35,000 holders. These minted IDs are mostly tied to newer cars, with over half belonging to 2020 models or later. Dimo’s thesis revolves around the idea that car data will become increasingly valuable and open up new unlocks for everything from a fully automated DMV to a smart parking garage that sends automated instructions to cars that park themselves.

Dimo’s end goal is to achieve a system of vehicle-related applications that are powered by the Dimo network, eventually connecting every car as well. Their blog has listed recent innovations like a new dashcam app that rewards you for every drive, improved vehicle refinancing, and a mobile mechanic service.

Blackbird

Blackbird is an application that rewards restaurant diners and hopes to power the restaurant economy of the future. The app’s premise is simple: the more you eat at your favorite restaurants, the more rewards you’re eligible to earn, compounding over time and enabling more bespoke experiences. Blackbird’s goal is to facilitate connections between restaurants and guests, offering both of these parties a shared network and reward system.

Restaurant operators are facing more challenges now than ever before: 5% of topline revenue goes to credit card processing, third-party delivery, and other services. Additionally, the role of technology has become even more important to restaurant owners, as digital sales grew from 8.9% of the total in 2019 to over 20.8% in 2024.

“For restaurant operators, it is time to accept the simple but daunting reality that economic sustainability can no longer be achieved through disciplined operations and legacy best practices.”

Blackbird’s Flypaper says that numerous issues plague existing restaurants’ loyalty programs, specifically misaligned interests, data fidelity, lack of control, and point-of-purchase liquidity. Blackbird’s solution to this is a four-pronged system of unique Guest Profiles that are meant to establish a point-of-contact between restaurants and their most loyal customers. This data includes a history of check-ins, guest contact information, wallet balance, and an expected estimate of a guest’s lifetime “value” score to a restaurant.

Blackbird’s vision is to eventually build a massive network of restaurants that can tap into a loyal customer base without going through traditional customer loyalty programs. By embracing technology and leaning into this earlier than others, the thesis for restaurant operators that choose Blackbird is they can prioritize spending time running the restaurant, creating valuable experiences, and delivering quality food for their customers. Globally, the customer loyalty program market is estimated to be around $11.7 billion – Blackbird makes this simpler and provides a unique experience that customers are more likely to embrace. 

Customers who use Blackbird are rewarded with FLY from restaurants in exchange for their business. Actions like a payment or a check-in trigger a FLY payment, rewarding users for consistently visiting these locations. The FLY token represents a standardized unit of account that restaurants can use as a scoring system. Customers can earn FLY tokens, spend them at restaurants, and cash in perks as they go about their usual restaurant-going behavior. FLY emissions are worth paying closer attention to, as the protocol has distributed over 128.5 million FLY as of August 2024 – this has gone to both customers and restaurant operators.

The advantage of FLY over traditional customer loyalty scoring systems is that FLY is extendable beyond a singular restaurant – the points you earn can be used across the ecosystem of Blackbird-connected restaurants, leading to shared value that avoids isolation. Restaurants that might be skeptical of Blackbird can consider the network effects of opting into the system versus staying removed from the feedback loop. If you own a restaurant on a block where your competitors are connected with Blackbird, you’re potentially missing out on business as you aren’t offering generalized rewards to future customers. 

The network has seen significant success in New York City, where density makes word-of-mouth awareness easier. Candidly, one of the main issues that might plague Blackbird is rewarding too much FLY early on – though to be fair, over-emitting tokens is not exactly an uncommon problem in crypto.

Silencio

Silencio is creating a global network for reducing noise pollution, made possible with the help of a mobile app where users mine “noise coins” by logging hyper-local noise data. Their litepaper cites a statistic that over 68% of the global population will live in cities by 2050. As you might expect, city dwellers are more likely to be subject to noise pollution, which the European Environment Agency describes as the second most harmful type of pollution after air pollution (kek).

This might not make much sense if you’ve never experienced it, but the phenomenon is very real. Anecdotally, your first night sleeping in a large city will be filled with sirens late into the night, cars honking, and people bustling around on the street. In recent years we’ve become more obsessed with perfecting the quality of our nightly sleep in order to improve our daily health and potentially prevent disease. If noise pollution is a deterrent standing in the way of a consistent good night’s rest, Silencio wants to enable a quieter future by rewarding anyone who contributes to the network.

The user flow involves signing up for a free mobile app, logging your location data, and submitting your surrounding sound level (dBA) to Silencio in exchange for token rewards. Silencio takes in this data – which has never been collected at this scale before – and uses it to benefit any industry looking to reduce noise pollution or improve the wellbeing of people living in cities. Admittedly, this does raise the question of who exactly the demand side is for this specific application.

Silencio improves on previous methods of generating noise pollution maps, most notably by using smartphones instead of fixed environmental sensors. There isn’t a better device for capturing accurate, up-to-date noise pollution mapping on a global scale. Silencio cites two research papers relevant to their tech stack: Murphy and King’s 2015 study on environmental noise measurement and a 2021 study on crowd-based data collection for noise assessment.

Their approach is to simply measure noise data through smartphone microphones and collect enough of it to minimize the margin of error for locations being measured. The idea is that even if this technology isn’t perfect, you can’t build this global data network without utilizing a globally adopted technology like smartphones. 
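
As a rough illustration of that aggregation in Python: bucket smartphone dBA samples into coarse location cells, reward each accepted submission, and let the standard error of a cell’s estimate shrink as samples accumulate. The grid size, sanity filter and reward rate are illustrative assumptions, not anything Silencio has published.

```python
import math
from collections import defaultdict

REWARD_PER_READING = 1   # hypothetical "noise coins" per accepted sample
GRID_PRECISION = 3       # rounding lat/lon to 3 decimals gives roughly block-sized cells

readings: dict[tuple[float, float], list[float]] = defaultdict(list)
balances: dict[str, int] = defaultdict(int)

def submit_reading(user: str, lat: float, lon: float, dba: float) -> None:
    """Accept a smartphone dBA sample, bucket it into a grid cell, reward the contributor."""
    if not 0 <= dba <= 140:   # crude sanity filter for bogus microphone data
        return
    cell = (round(lat, GRID_PRECISION), round(lon, GRID_PRECISION))
    readings[cell].append(dba)
    balances[user] += REWARD_PER_READING

def cell_estimate(lat: float, lon: float) -> tuple[float, float]:
    """Mean noise level and its standard error for a cell; more samples -> smaller error."""
    samples = readings[(round(lat, GRID_PRECISION), round(lon, GRID_PRECISION))]
    n = len(samples)
    if n == 0:
        return float("nan"), float("inf")
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / (n - 1) if n > 1 else 0.0
    std_error = math.sqrt(variance / n) if n > 1 else float("inf")
    return mean, std_error

for i in range(50):   # many phones sampling the same block
    submit_reading(f"user_{i}", 40.7128, -74.0060, 68 + (i % 7))
print(cell_estimate(40.7128, -74.0060))   # estimate tightens as n grows
```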

Coverage has been impressive so far, with most of India, Western Europe, and the United States now having semi-accurate noise pollution data at scale. It remains to be seen whether Silencio can monetize this data.

On a related note, we think there’s room here to build an Acoustic Sensor Network. We’ve shared the core premise below, but there’s far more detail on our thesis database.

PuffPaw

PuffPaw is a vape-to-earn project (yes, this is a real business) that wants to incentivize individuals to quit smoking through the introduction of token incentives and a physical vape device that monitors vaping habits. 

We know how harmful cigarettes and other smokable tobacco products are: over 480,000 individuals in the US alone die from smoking-related illness every year, with secondhand smoke accounting for over 41,000 of those deaths. The tobacco product market generates almost $1 trillion of revenue each year. That is a staggering amount of money for an industry that actively feeds off the health of its customers.

The team behind PuffPaw saw the recent rise of non-tobacco, nicotine-based vapes and pouches that have achieved significant growth – PuffPaw’s goal is to incentivize people to curb their addictions with marginally healthier alternatives.

The vape-to-earn mechanism serves both to incentivize the use of non-tobacco vapes and to collect data on user vaping habits. PuffPaw’s solution is a physical vape device that can be purchased and that actively rewards users for reducing their nicotine consumption.

Typical DePIN projects have a lifecycle built around initial investment, mining, and token reward distribution – this can limit future entrants, as earlier participants earn outsized token rewards and dilute those who join later. PuffPaw’s system was designed to maintain sustainability and manage both incentive dilution and user expectations.

The project isn’t live yet but will be launching exclusively on Berachain mainnet. In an ideal world PuffPaw will onboard a new group of non-crypto users and introduce vape-to-earn as a gateway to crypto. 

Shaga

Shaga wants to redefine the cloud gaming landscape by deploying a decentralized network of idle PC compute that can power gaming infrastructure on a global scale, starting with a focus on web3 games. Aiming to offer near-zero-latency gameplay from anywhere in the world, Shaga is built on Solana and leverages a P2P architecture so anyone can participate in gaming without shelling out money for more performant devices. In a recent Twitter post, Shaga shared statistics on how many gamers face latency-related issues in their daily lives:

  • 39% of gamers find latency issues to be their top frustration
  • 42% said latency issues stop them from playing as much as they’d like
  • 24% stop playing and quit to play something else
  • 20% experience frequent latency issues, even worse on mobile

We’ve seen the gaming industry become massive – estimates put 2022 revenues at over $347 billion, with over two-thirds coming from mobile gaming. This could point to the idea that most individuals globally aren’t able to access gaming consoles or purpose-built gaming PCs. Professional gaming is usually limited to either PC titles (League of Legends, CS:GO) or console titles (Super Smash Bros) – professional mobile gaming remains a comparatively small market, as mobile titles are typically marketed toward a different user base.

Shaga wants to break down the barriers to PC gaming access, enabling individuals to supply idle compute in exchange for rewards while opening access to more technically demanding games for those without the necessary hardware. Shaga transforms these PCs into nodes capable of bypassing centralized servers, reducing the distance data must travel to facilitate these gaming experiences.
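
A toy sketch of that host-selection logic: among idle nodes, pick the one with the lowest measured round-trip time to the player, or fall back if nothing is close enough. The node fields, latency budget and names are our own assumptions, not Shaga’s actual protocol.

```python
from dataclasses import dataclass

@dataclass
class HostNode:
    node_id: str
    region: str
    rtt_ms: float   # measured round-trip time from the player to this node
    idle: bool      # whether the PC is currently free to host a session

def pick_host(nodes: list[HostNode], max_rtt_ms: float = 40.0) -> HostNode | None:
    """Choose the idle node with the lowest measured latency, if any is close enough."""
    candidates = [n for n in nodes if n.idle and n.rtt_ms <= max_rtt_ms]
    return min(candidates, key=lambda n: n.rtt_ms, default=None)

nodes = [
    HostNode("pc-berlin", "eu-central", rtt_ms=18.0, idle=True),
    HostNode("pc-paris", "eu-west", rtt_ms=25.0, idle=True),
    HostNode("pc-nyc", "us-east", rtt_ms=95.0, idle=True),   # too far for a smooth session
]
best = pick_host(nodes)
print(best.node_id if best else "fall back to a traditional cloud region")
```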

If Shaga is actually capable of delivering these gaming experiences for individuals without PCs, the opportunity is interesting. There’s been a lot of discussion concerning how web3 games can compete with the modern or legacy gaming industry’s tech stack, but this is a unique method of finding some sort of wedge. 

Heale

Heale is a unified API for logistics, connecting heterogeneous logistics systems and tokenizing the data to create a decentralized “master record” of logistics for more efficiency at every layer of the stack. Their whitepaper highlights the $10.4 trillion global logistics industry and its consistent growth – everyone, everywhere in the world needs goods shipped to them.

The process doesn’t only concern final goods, but also the transportation and delivery of the raw materials used to produce them. Heale expands on some of the main issues preventing the logistics industry from fully embracing 21st-century technology:

  • Processing and administration costs are up to 20% of final transaction costs
  • 6% of invoices have billing errors resulting in over $455m of costs annually
  • Carriers are charged 1.5% – 9% for factoring invoices and accessing working capital
  • 40 billion empty miles waste 6 billion gallons of diesel, resulting in $28b of wasted resources

Heale’s product design is focused on solving three problems:

  • lack of data standardization
  • high switching costs
  • unclear logistics ROIs

Its platform currently functions as a custom EVM L2 built with the Polygon CDK.

Heale works by letting users sign up and verify their identity on-chain, creating a point of contact for future business and transactions. That entity can then tap into the Heale network to perform transactions and submit deals within this interoperable system. Because all data sent to Heale is standardized and verified, participants can feel more comfortable transacting than they otherwise might.

Heale aggregates data from TMSs (transportation management systems), ELDs (electronic logging devices), and IoT devices, which is then submitted to the blockchain and published for real-time usage. One of the benefits of Heale is that its product doesn’t require users to switch their traditional behaviors – Heale only improves upon how this data is used. 

Heale’s initial focus is building an easy-to-use API and SDK so shippers, brokers, carriers, and drivers can plug into the network and developers can build on top of its data. Users that submit high-quality data are rewarded with HEALE tokens for their contributions to the network. The more data Heale collects over time, the better it can reward future users and build up a logistics map unburdened by siloed systems. Heale wants to get involved with every step of the logistics life cycle, from pre-transit operations to post-transit payments.
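
As a rough illustration of that standardize-then-reward loop, the sketch below maps one hypothetical TMS export row onto a shared schema, scores its completeness, and pays out proportionally. The field names, schema and reward curve are invented for the example and shouldn’t be read as Heale’s actual data model.

```python
from dataclasses import dataclass

REQUIRED_FIELDS = ("shipment_id", "origin", "destination", "pickup_time", "delivery_time", "carrier_id")
MAX_REWARD = 10   # hypothetical HEALE per fully complete, verified record

@dataclass
class StandardizedRecord:
    source: str             # "tms", "eld", "iot", ...
    fields: dict[str, str]

def normalize_tms_row(row: dict[str, str]) -> StandardizedRecord:
    """Map one vendor-specific TMS export row onto the shared schema (field names are illustrative)."""
    return StandardizedRecord("tms", {
        "shipment_id": row.get("LoadNumber", ""),
        "origin": row.get("OriginCity", ""),
        "destination": row.get("DestCity", ""),
        "pickup_time": row.get("PU_Appt", ""),
        "delivery_time": row.get("DEL_Appt", ""),
        "carrier_id": row.get("SCAC", ""),
    })

def quality_score(record: StandardizedRecord) -> float:
    """Fraction of required fields present -- a crude stand-in for 'high quality data'."""
    present = sum(1 for f in REQUIRED_FIELDS if record.fields.get(f))
    return present / len(REQUIRED_FIELDS)

def reward(record: StandardizedRecord) -> float:
    return round(MAX_REWARD * quality_score(record), 2)

row = {"LoadNumber": "L-9917", "OriginCity": "Dallas", "DestCity": "Memphis", "SCAC": "ABCD"}
rec = normalize_tms_row(row)
print(quality_score(rec), reward(rec))   # an incomplete record earns a partial reward
```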

An ideal end state of Heale would be an extremely robust network with years of logistics data, extendable to almost any step of the logistics life cycle. Individuals working in this field could tap into the Heale network, find data around a niche method of transportation they might need to facilitate and use this to better understand their own unique business requirements. 

The platform is still relatively new, but Heale is probably the best example of crypto being used as a wedge in a traditionally slow-to-adapt industry. Similar to discussions around integrating new technology into the modern electric grid, Heale is providing a solution that doesn’t require a complete overhaul of our logistics industry to improve operational efficiency. 

All of these projects are building in extremely different, and mostly underexplored verticals. So while it’s not quite as easy to see where crypto fits in compared to a perps dex, we know that payments, verification, transparency and resource utilization are all core underpinnings of crypto technology. These DePIN projects showcase that an underappreciated use case of crypto is its ability to drive coordination and incentivize net-beneficial behaviors. Whether or not these projects turn into massive companies remains to be seen, but they are a few tentacles for crypto to reach out into the traditional world and solve real problems.

Where Do We Think DePIN Is Headed?

There’s so much more design space to explore here, especially as adjacent technologies rapidly improve – better, cheaper, smaller and more efficient hardware is coming.

The list below is certainly not exhaustive, but it’s a peek into how we think these types of networks will either overturn existing solutions or become category-creators for markets that are only just beginning to develop. Admittedly these are far less developed and more speculative in nature, but this is also what true venture capital is about. We’ve roughly categorized them across Public Goods, Bio & Healthcare, Materials and Sensors.

Disaster Response Network

The hidden costs of delayed response times during natural disasters are staggering. And while the most top-of-mind example is hurricane season in the Southeast, these types of disasters are felt everywhere. Slow response times to wildfires in California, flooding in the Midwest, and tornadoes in the Plains all have immediate and second-order economic effects. The frequency of serious natural disasters is only increasing across the world, and we’ve consistently seen that the existing infrastructure for dealing with them is archaic.

Direct costs are obvious and massive – every minute emergency services are delayed means more property damage and higher costs to business continuity and healthcare. The indirect costs of insurance claims, lost economic output and, in severe cases, migration impact these regions long after the news cycle has moved on.

One solution is a decentralized network of connected devices that uses smart contracts to automatically trigger emergency responses when sensor data crosses predefined thresholds. A blockchain-based platform could improve coordination around resource allocation and real-time data sharing – and as crypto moves toward mobile, an embedded app for citizens to report emergencies and irregularities, offer aid, and track response efforts at hyper-localized levels could both speed up response times and allow for more precise targeting.
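
A minimal sketch of that trigger logic, written in Python for readability rather than as an on-chain contract: compare incoming sensor readings against predefined thresholds and append an auditable dispatch event whenever one is breached. The metrics and threshold values are illustrative only.

```python
from dataclasses import dataclass
import time

# Illustrative thresholds -- a real network would set these per sensor type and region.
THRESHOLDS = {
    "river_level_m": 4.5,      # flood gauge
    "pm25_ugm3": 150.0,        # wildfire smoke proxy
    "wind_speed_kmh": 120.0,   # hurricane / tornado conditions
}

@dataclass
class SensorReading:
    sensor_id: str
    metric: str
    value: float
    lat: float
    lon: float

def check_and_trigger(reading: SensorReading, log: list[dict]) -> bool:
    """Mimic what a smart contract would do: compare against a predefined threshold
    and, if breached, record an auditable dispatch event for responders to act on."""
    threshold = THRESHOLDS.get(reading.metric)
    if threshold is None or reading.value < threshold:
        return False
    log.append({
        "event": "dispatch_requested",
        "metric": reading.metric,
        "value": reading.value,
        "location": (reading.lat, reading.lon),
        "timestamp": int(time.time()),
    })
    return True

event_log: list[dict] = []
check_and_trigger(SensorReading("gauge-12", "river_level_m", 5.1, 29.95, -90.07), event_log)
print(event_log)   # one auditable trigger that downstream responders can subscribe to
```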

Distributed Robotics Training 

We’ve shared our views on this publicly, and the space for decentralized robotics training is still quite small despite numerous advancements in robotics.

There are a handful of teams working on DePIN robotics projects: Mecka, XMAQUINA, KrangAI, and FrodoBots. Each focuses on a different part of the robotics stack, but distributed training mechanisms feel like the most rewarding and potentially useful vertical to spend time on. If there were a way to incentivize average individuals to record their daily lives and submit that footage to a project that aggregates it for robotics training, it might be a feasible way to reduce the need for synthetic data generation or alternative training mechanisms.
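
One way such an incentive scheme could work is sketched below: deduplicate submitted recordings, pay a bonus for task types the dataset hasn’t seen before, and scale rewards with recording length. The scoring weights, novelty bonus and reward unit are hypothetical illustrations, not something any of the teams above have announced.

```python
from dataclasses import dataclass
import hashlib

REWARD_PER_POINT = 0.5   # hypothetical token units per scoring point

@dataclass
class Episode:
    contributor: str
    task_label: str       # e.g. "folding laundry", "loading dishwasher"
    duration_s: float
    video_bytes: bytes

seen_hashes: set[str] = set()
tasks_seen: set[str] = set()

def score_episode(ep: Episode) -> float:
    """Reward longer recordings and unseen tasks; exact duplicates earn nothing."""
    digest = hashlib.sha256(ep.video_bytes).hexdigest()
    if digest in seen_hashes:
        return 0.0
    seen_hashes.add(digest)
    novelty_bonus = 5.0 if ep.task_label not in tasks_seen else 0.0
    tasks_seen.add(ep.task_label)
    points = min(ep.duration_s / 60.0, 10.0) + novelty_bonus   # cap the length component
    return round(points * REWARD_PER_POINT, 2)

print(score_episode(Episode("ada", "folding laundry", 480, b"frames-a")))   # novel task -> 6.5
print(score_episode(Episode("bob", "folding laundry", 480, b"frames-b")))   # same task  -> 4.0
```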

If we’re discussing the concept of a general-purpose robot capable of interacting with humans or objects in the physical world, then this type of robot requires a large amount of equally generalized data for highly-specific interactions or tasks. This data isn’t easy to collect, as most development towards these robots has focused on highly specific functions. Tesla has recently started paying test participants up to $48 an hour for extended periods of robotics data collection. 

The robotics industry is at a crossroads, with inconclusive evidence as to whether or not scaling will benefit robotics development in the same way it did natural language processing and computer vision models. Whether or not this turns out to be true, a large amount of data needs to be collected so these models can be scaled up in similar fashion to LLM development in recent years. 

“Riding this wave means recognizing all the progress that’s happened because of large data and large models, and then developing algorithms, tools, datasets, etc. to take advantage of this progress. It also means leveraging large pre-trained models from vision and language that currently exist or will exist for robotics tasks.”

For what it’s worth – and if recent advancements from Disney are any indication – we’re still a long way from interacting with humanoid robots in our daily lives.

Water Quality Monitoring

With increased discussion of just how prevalent microplastics are in the human body, attention is shifting toward our water supply and what we put in our bodies every day. Personalized health will be the norm soon enough. One way to achieve a decentralized water quality monitoring system could involve deploying consumer water filters in households. There’s been a recent trend of buying reverse osmosis filtration systems to avoid ingesting heavy metals, pesticides, fluoride and pharmaceuticals. The idea has been discussed in greater detail here, though the introduction of blockchain tech makes it slightly more feasible given the previous lack of incentives.

There’s a large opportunity for a DePIN project to create relatively cheap water filters that plug into a home’s plumbing, use pH or turbidity sensors to collect contaminant data, and reward users for consistently high-quality water or consistently improved water quality data. The idea would be to incentivize users to a) live healthier lifestyles, beginning with water consumption, and b) gain valuable insights into local water quality outside of bureaucratic systems that have shown little intent to improve or monitor it.
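
A minimal sketch of the household-side loop, under our own assumptions about thresholds and rewards: take a filter’s pH and turbidity readings, flag anything outside broadly safe ranges, and credit the contributor for the submission.

```python
from dataclasses import dataclass

# Illustrative ranges roughly in line with common drinking-water guidance; a real
# network would calibrate these per contaminant and per sensor model.
SAFE_PH = (6.5, 8.5)
MAX_TURBIDITY_NTU = 1.0
REWARD_PER_SUBMISSION = 2   # hypothetical token units

@dataclass
class FilterReading:
    household_id: str
    ph: float
    turbidity_ntu: float

def process_reading(reading: FilterReading, balances: dict[str, int]) -> dict:
    """Score one in-home filter reading, reward the contribution, and flag anomalies."""
    flags = []
    if not SAFE_PH[0] <= reading.ph <= SAFE_PH[1]:
        flags.append("ph_out_of_range")
    if reading.turbidity_ntu > MAX_TURBIDITY_NTU:
        flags.append("high_turbidity")
    balances[reading.household_id] = balances.get(reading.household_id, 0) + REWARD_PER_SUBMISSION
    return {"household": reading.household_id, "flags": flags}

balances: dict[str, int] = {}
print(process_reading(FilterReading("hh-42", ph=6.1, turbidity_ntu=2.3), balances))
print(balances)   # contributions accrue even when the water itself needs attention
```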

Collaborative Space Debris Tracking

The rapid increase in satellite launches over the last few years has led to exponential growth in space debris and congestion in low Earth orbit (LEO). Companies like SpaceX and Amazon intend to launch thousands of satellites in an attempt to provide global internet coverage. At the same time, the miniaturization of satellites is significantly lowering the barrier to entry for space missions. With that comes an exponential increase in the probability of collisions – a single collision can mean nine figures of lost satellite value and replacement costs.

Tracking the smallest debris (<10cm) requires very sensitive equipment, likely beyond a hobbyist’s budget today. However, these costs will continue to shrink, and there are already meaningful ways for amateurs to add value right now.

Radar tracking is most likely beyond hobbyist capabilities, but optical tracking (for those with 8–14 inch aperture telescopes), laser ranging and radio tracking (an SDR receiver, antenna and a computer for signal processing) are all effective and doable today. Additional tailwinds here include:

  • More collaborative networks like the IASC
  • Advancements in consumer-grade telescopes & cameras
  • Open-source software development to make data processing more accessible

Decentralized Healthcare Platform

We’ve written extensively about this, but the idea is worth reiterating.

DeSci is still very much in its infancy, and there hasn’t been significant progress toward introducing crypto incentives into the healthcare system. Biohacking is rapidly gaining popularity, and individuals are becoming far more active in managing their personal health.

This system could take many forms:

  • Mobile app where individuals compete with friends on physical fitness challenges for rewards (a rough settlement sketch follows this list)
  • Personal health assistant that financially rewards or punishes you (via smart contracts) depending on physical activity behaviors
  • A decentralized 23andMe that becomes the largest, most accurate, and verifiable ledger of individual genome data (with some help from ZK proofs)
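
To ground the first idea in that list, here is a toy settlement function for a staked fitness challenge: everyone escrows up front, attested step counts come in, and those who hit the target split the forfeited stakes. In practice the escrow and attestation would live in a smart contract and an oracle; this Python version only illustrates the payoff logic, with made-up names and amounts.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    stake: float   # amount escrowed into the challenge
    steps: int     # attested step count over the challenge window

def settle_challenge(participants: list[Participant], step_target: int) -> dict[str, float]:
    """Winners get their stake back plus an even share of the stakes forfeited by those who miss."""
    winners = [p for p in participants if p.steps >= step_target]
    losers = [p for p in participants if p.steps < step_target]
    forfeited = sum(p.stake for p in losers)
    payouts = {p.name: 0.0 for p in participants}
    for w in winners:
        payouts[w.name] = w.stake + forfeited / len(winners)
    return payouts

friends = [
    Participant("maya", stake=20.0, steps=82_000),
    Participant("leo", stake=20.0, steps=75_000),
    Participant("sam", stake=20.0, steps=41_000),   # misses the weekly target
]
print(settle_challenge(friends, step_target=70_000))   # {'maya': 30.0, 'leo': 30.0, 'sam': 0.0}
```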

Regardless of whether this gets explored in the short term, there’s a real need for decentralization in the healthcare industry. Epic Systems holds a pseudo-monopoly, with roughly 36% of the U.S. hospital market running its electronic health record software. Outside of the software powering all of this, it’s difficult to gain insight into clinical trial results or pharmaceutical supply chains, or to access patient data.

A decentralized healthcare network isn’t just useful as a thought experiment – it’s valuable to humanity. If we had access to more of the information about ourselves and understood more about how that data gets used within the healthcare industry, the path to breakthrough scientific discovery would become clearer too.

Tokenized Genetic Diversity Preservation

Traditional, centralized biodiversity conservation efforts don’t have a breadth advantage when it comes to discovering previously unknown or extinct genetic variants. One way to expand this reach – compressing the time to discovery, facilitating more targeted conservation efforts, and potentially enabling groundbreaking de-extinction projects in the future – would be leveraging a distributed network of citizen scientists, naturalists and adventurers. But what might this look like?

Shared Dream Mapping

We continue to think that the 8 hours of a typical day earmarked for sleep will see rapid technology advances. The first crude evidence of this has shown up with things like EightSleep, sleep-trackers and an overall recognition that maximizing the quality of our sleep is important. But as brain monitoring and neuromodulation technology improves, what we do in our sleep will become fertile ground for innovation. We highlighted Shared Virtual Dreams Using Connected Neural Interfaces in our Crypto Future piece but data mapping is another interesting place to explore here.

The higher-level approach would involve incentivizing users to securely record and privately share their dream data. There’s a lot of research here that we’ll spare you, but broad-based distribution – individuals sharing this data with sleep researchers, psychologists and lucid dreaming experts – is compelling. Given the sensitivity, this is obviously a far more complex design space than some of the other flavors of DePIN, but it’s not difficult to imagine the complementary fit between token incentives, a motivated and experimental initial user base, and a difficult-to-collect data problem.

Closing thoughts

There’s a lot of work being done in DePIN, and it remains one of the most credible long-term sustainable areas of investment in crypto. Despite all the progress, there are still endless problem areas that can be experimented with using distributed infrastructure networks, be that physical or virtual. 

Even with all of these companies building out specialized DePIN networks and doing valuable work, there are crypto-native issues that can’t be avoided. Top of mind is the question of how to effectively create tokenomics that are fair, sustainable, and value-producing. Many of these projects have live tokens, but few have deep liquidity or visible demand. That’s to be expected at this stage. But the most interesting teams, in our view, are those that either deeply understand where the demand side is and how to solve its problems in practice, or those building on a longer-term thesis toward something not widely appreciated today.

Helium is acting as a poster child for DePIN, pushing into the traditional world and reaching a real sense of escape velocity. Their success is a testament to what it actually takes to build a valuable network – a team with deep expertise in their field, iterating and pivoting as their assumptions are tested and validated (or invalidated), and shipping real products. This sounds obvious in hindsight, but we would still like to see more teams in crypto – DePIN included – sourcing real customers and understanding their needs as early as possible.

This report was meant to cover the wider DePIN space, touch on some of the existing networks, and surface networks we think should be built – all while sharing our own internal views on some of the competitive dynamics and company-building steps we think matter. Hopefully you enjoyed the breadth and depth, even if it took some time to work through the whole piece. The idea wasn’t to overwhelm you with every little detail surrounding DePIN, but to paint a picture of how the space currently looks and why we think it’s attractive going forward. Of course we left a bunch of projects out, probably got some things wrong, and have left a ton of questions unanswered. So consider this a blanket mea culpa in advance.

The sector has plenty of challenges to overcome, and very few teams have reached any semblance of PMF, but we remain optimistic and are actively investing in teams building in the broader DePIN space. Disrupting traditional finance or improving global payment rails are two of the more widely accepted future use cases for crypto, while we argue there’s room for a third in DePIN.

Oftentimes in crypto there is a bias toward cynicism – “show me the DePIN coins that are worth owning” – but this is short-sighted and fails to recognize that most startups fail. Crypto is not an established field like public equities, so asking for a “DePIN basket” the same way you would look for exposure to Industrials or Consumer Staples is nonsensical. In our view, this is the equivalent of applying short-term narrative trading to startup equity.

Most DePIN networks will never realize their aspirations or ambitions, just as most startups don’t. But we continue to believe the problems these types of infrastructure networks can solve are so large that the best teams building here are undeniably worth supporting and partnering with.

This content is for informational purposes only, and should not be relied upon as the basis for an investment decision — none of this is investment, legal, business, tax or any other kind of advice. Compound may be investors in some of the companies/protocols mentioned.
