Our Investment in Modular

Building for a Unified, High-Performance Computing Future

Since the early days of Bell Labs and Fairchild, trillions of dollars have been spent to pollinate global technology infrastructure. With the benefit of 20/20 hindsight, it is now clear that this investment enabled society to collectively build something novel - a digital central nervous system for the entire world. 

The evolution of this central nervous system compelled specialization in the underlying hardware; as Hennessy and Patterson noted in their 2017 Turing Award lecture, an era of domain-specific hardware architectures has propelled, and continues to propel, computing to new heights. Yet in the growing wake of this trend, low-level software has not kept pace, and technologists have been left to contend with the challenges created by unparalleled heterogeneity in their underlying infrastructure.

Of course, this wasn’t always the case. In simpler times, when Grace Hopper was designing one of the first compilers, a “bug” in one’s computing environment still referred to the presence of physical insects in a relay panel. But we’ve come a long way since then, and the visions of pioneers such as Hopper, Turing, and von Neumann have in many ways been fulfilled and surpassed in the AI renaissance in which we currently find ourselves.

Despite the recent strides made in AI, headlines persistently highlight how GPU shortages are limiting progress. Yet irrespective of supply-chain constraints, it is evident that infrastructure is operating materially below its full potential - due in large part to the patchwork of heterogeneous and incompatible compiler toolchains, libraries, and frameworks proliferating in the wild.

It was in discussions with the Modular team that the extent and difficulty of this problem were cogently elucidated, along with an obsessive ambition to fix it. That is why we’re excited to announce today that General Catalyst is leading their $100M Series B financing.

Modular Co-Founders Chris Lattner and Tim Davis - through their work on compilers and frameworks such as LLVM, MLIR, Swift, and TensorFlow - have already touched billions of devices with their technology. We believe it’s through these experiences, and with the incredible team they’ve assembled to date, that they’ve developed not just another bandage, but rather a comprehensive, clean-sheet approach to unify infrastructure and maximize its performance in a manner that is actually pleasant for developers.

We think Modular’s high-performance compute engine, coupled with its Pythonic language Mojo, holds significant promise: not only to resolve lingering technical debt and enable higher (and more diverse) hardware utilization, but also to narrow the gap between computing research and application and to make scaling more cost-effective during this next epoch (pun intended) of computing. In the midst of this AI era, all of the above, as applied to both CPUs and GPUs, are critical enablers of progress.
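
For readers unfamiliar with Mojo, here is a minimal, illustrative sketch of what “Pythonic” means in practice - Python-style syntax combined with statically typed, compiled functions. This is a toy example based on publicly documented Mojo basics, not Modular’s own code:

```mojo
# Illustrative toy example based on publicly documented Mojo syntax.

fn add(a: Int, b: Int) -> Int:
    # `fn` functions are statically typed and compiled ahead of time.
    return a + b

fn main():
    # The call site reads like ordinary Python.
    print(add(2, 3))
```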

In the Modular team we found an extraordinary group unanimously dedicated to solving hard problems with intentionality - from both a technological and a societal perspective. The team’s thoughtful commitment to facilitating the responsible development of AI technologies resonates with our own principles of Responsible Innovation at General Catalyst.

We’re elated to partner with Chris, Tim, and the entire Modular team as they continue to build for a unified, high-performance computing future.
