Insights

Ben Sand on Strong Compute’s path to enterprise via hackathons, research grants and more

By Rochelle Ritchie

July 15, 2024

Each day looks different for Ben Sand, founder of Strong Compute, and has since he first founded the company in 2020. Before that, he founded his previous company, Meta (no relation to Facebook), led a worldwide accelerator for Telstra’s muru-D, and built a network that was only strengthened by his time at Y Combinator.

Backed by Sequoia India, Y Combinator, Folklore Ventures and more, Strong Compute has been rapidly expanding its influence on optimising the Fortune 2000 through AI compute solutions since inception, supported by a growing engineering team across Australia and the United States.

In conversation with Ben, this discussion explores how Strong Compute has built an enterprise offering, the growth tools he’s using along the way, and his thoughts on the future state of AI infrastructure, including some common misconceptions that have since played out in the market, as well as some broad recommendations on what people should be reading and doing to keep up to date in the space.

Let’s get into it.

Let’s start easy by explaining how Strong Compute provides large-scale leverage to the Fortune 2000.

“Strong Compute provides fast compute for machine learning use cases - allowing companies using our software to develop neural networks 10x faster (or more).

In terms of how we’d describe our product - we’re creating a management system for GPUs. 

In the same way that people in an organisation need a management (or HR) function, Strong Compute offers the same service (onboarding, sourcing, vetting, performance management, task management and offboarding) for GPUs. This provides leading companies needing large-scale computing power (all of the Fortune 2000) with leveraged access to the infrastructure they need.

This is how we think about the business case - offering a complete solution to Fortune 2000 companies looking to deliver effectively at large scale.”
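To make the ‘HR for GPUs’ analogy above concrete, here is a minimal, hypothetical sketch of a GPU lifecycle tracker. The stages simply mirror the functions Ben lists; the class and method names are illustrative assumptions and do not reflect Strong Compute’s actual API.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch only: stages mirror the HR-style functions listed above.
# Names are illustrative and are not Strong Compute's API or data model.

class Stage(Enum):
    SOURCED = auto()     # located at a provider
    VETTED = auto()      # benchmarked and health-checked
    ONBOARDED = auto()   # visible to the scheduler
    ASSIGNED = auto()    # running a workload
    OFFBOARDED = auto()  # released back to the provider

@dataclass
class GPU:
    gpu_id: str
    provider: str
    stage: Stage = Stage.SOURCED
    utilisation: float = 0.0  # rolling 0..1 average, used for performance management

class GPUFleet:
    """Tracks each GPU through its lifecycle, much as an HR system tracks staff."""

    def __init__(self) -> None:
        self.gpus: dict[str, GPU] = {}

    def onboard(self, gpu: GPU) -> None:
        gpu.stage = Stage.ONBOARDED
        self.gpus[gpu.gpu_id] = gpu

    def assign(self, gpu_id: str, workload: str) -> str:
        gpu = self.gpus[gpu_id]
        gpu.stage = Stage.ASSIGNED
        return f"{workload} scheduled on {gpu_id} ({gpu.provider})"

    def offboard(self, gpu_id: str) -> None:
        self.gpus[gpu_id].stage = Stage.OFFBOARDED

fleet = GPUFleet()
fleet.onboard(GPU("gpu-001", "provider-a"))
print(fleet.assign("gpu-001", "resnet-training"))  # -> resnet-training scheduled on gpu-001 (provider-a)
```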

What makes Strong Compute unique?

“There are probably a dozen or so companies worldwide with a smaller angle on this problem. The biggest difference between us and them is that everything I’ve seen out there takes a stake in a partial solution and starts from a few inherent assumptions (something we encourage the team never to do). One example of this is when people say ‘data is heavy’, and that ‘it’s slow and expensive to move large amounts of data’.

A key benefit of being a first mover is that we’re able to break assumptions like this. We’ve made it cheap and fast to move large amounts of data, which means there are more ways for us to build systems for increased leverage.

So, as one example, instead of trying to stand up high speed cache clusters next to permanently owned GPUs, we offer on demand GPU access wherever it’s needed.

This changes the framing of the conversation - we’re trying to move from it feeling like a construction project (which is how AI compute seems to be regarded by most today) into something far more flexible, fluid and on demand.”

How do you ensure you’re focused on the right thing at any one time?

“We start with a focus on our key customer, how we can unlock their biggest pain point and the value that can bring to the business as a whole. Understanding that allows us to see that the value of what we’re doing is in the enterprise space: it’s in unlocking speed (and cost effectiveness) for the Fortune 2000.

With enterprise as a key client, the path to adoption isn’t as short as it would be for mass-market customers. This means the product roadmap and adoption curve have to be taken in sections.

At the time of this conversation, we’re focused on maximising input data (feedback) for our hardware and systems, and on building a community across Sydney and San Francisco. Over the next few weeks, this includes:

  • Offering closed and selective access to our platform for a Chess Hackathon we’re hosting in Sydney from the 3rd to the 4th of August, following a recent successful event in San Francisco.

  • We’re also currently accepting and reviewing applications for AI research grants, where we offer GPU access to builders working on preventing data leakage, post-transformer infrastructures and explainable AI - all development areas that will benefit the larger problem Strong Compute is solving for.

    We’ve already awarded four of those grants. We’re hoping to award at least ten.

This means we’re getting a lot of feedback on how our hardware and systems work. We’re also collecting data on the areas of AI research people are finding interesting, which reinforces the areas we’re developing in - ensuring our hypotheses match market need early.

This phase is vitally important for scale, as we’re testing not just the software itself, but also the documentation process.”

In the sea of 20 directions we could take this, why did you choose to focus on chess for the hackathon? 

“People like and understand chess. You might not be an expert, but you get the idea, and the idea of an AI playing chess has been around for a long time. 

So, I thought there’s value in being a little ‘old hat’ - as the idea is more alive than ever before. There are forums with tens of thousands of people who are really engaged in solving this. 

It’s a good, relatable way to stress test the system, and it’s quite contained. You don’t have to explain the use case to anyone, they just get it. So it means we can focus on cranking up and putting our system under pressure in a very specific context. 

Then the other side of that is the avant-garde research: whether transformers are the right kind of models to get us there. This kind of research has to be a lot more fluid, which means it won’t fit into a two-day event. We’re giving three-month research grants to produce meaningful advancements that haven’t been created before.

These two growth components of the business allow the core team (currently expanding) to focus on the key elements of building the infrastructure layer to service prospective clients.”

What sort of people do you want on the team to help Strong Compute hyper scale? 

“We’re building something that hasn’t been built before. And that attracts a certain kind of person. The type of person that wants to build a monitoring and control system for all the world's compute. 

I think what makes a good engineer for Strong Compute, for example, is someone who is very driven to do something new and comfortable working in areas that are unfamiliar to them. They’re happy to ask questions, they’re inquisitive, they’re not scared off by not being an expert in something, and they’re open to picking up new technologies.

We'll code in common stuff like Python and JavaScript. We'll code in less common stuff like Rust or code in far less common stuff like Elixir. We use the right tool for the job. And we do a lot of systems and hardware engineering as well.

So we appreciate people who want to have a big-picture view of things and unpack how it all fits together. If someone wants to develop a very narrow specialty then maybe they’re a great user of the system, because that’s what we’re trying to support. But as a developer of the system, we want people who have a pretty broad outlook and want to see how the whole thing fits together.”

When thinking more broadly, what has the market believed to be true about AI that has since been proven wrong? Note: this question was answered in June 2024.

“So I think that has been a big discovery in terms of how the market’s changed: everyone thought the creatives would be the last to be challenged by AI, and it’s more that they’re the first. Language has been solved before math, which I think probably also surprised a bunch of people.

Many were also surprised by the degree to which next-token prediction was effective at actually solving hard problems, which have largely proven to be prediction-based.

What we’re currently seeing is a move into mixtures of experts. ChatGPT knows it can’t do math, but it can recognise math, so (currently) it spins up a programming environment: it turns the math into a programming question, executes the program, then takes the result back to the language model and gives it to you.

Things have developed so fast already that we have to remember the time when you’d ask it ‘2+2’ and it’d be like, ‘oh, it could be five’. And you’re like, ‘what the hell is going on here? My calculator can do that.’ Now it can recognise and detect the question, encode it as programming, and then of course the answer is correct.”
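As a rough illustration of the routing pattern Ben describes (and not a claim about how ChatGPT is actually implemented), here is a minimal sketch in Python: arithmetic questions are detected and handed to a programmatic ‘expert’, while everything else would fall through to the language model. The function names are hypothetical.

```python
import ast
import operator
import re

# Minimal sketch of "route math to a programmatic expert" (hypothetical names,
# not ChatGPT's internals): if the prompt is plain arithmetic, compute it in a
# restricted evaluator; otherwise hand it off to the language model.

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def _eval_arithmetic(expr: str) -> float:
    """Safely evaluate a plain arithmetic expression via Python's AST."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError("unsupported expression")
    return _eval(ast.parse(expr, mode="eval"))

def answer(prompt: str) -> str:
    """Route arithmetic to the calculator; everything else goes to the LLM."""
    expr = prompt.strip()
    if re.fullmatch(r"[\d\s+\-*/().]+", expr) and any(c.isdigit() for c in expr):
        return str(_eval_arithmetic(expr))
    return "(handed off to the language model)"

print(answer("2+2"))              # -> 4
print(answer("Write me a poem"))  # -> (handed off to the language model)
```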

What do you read or subscribe to to stay up to date on what’s happening in this space? You can be as broad or specific as you’d like.

“The Information is probably worth reading, as well as the TLDR Newsletter.

I would recommend subscribing to a variety of views, so you can see how different people across different political systems are thinking about the space. Regulation is coming to AI, so it’s useful and important to keep across different government proposals for AI monitoring and management.”

Any parting words, or broader recommendations to any reader who’s made it this far?

“I think that everyone needs to know how to train an AI model.

They need to go and do a one-day course, maybe a one-week course. They need to get more familiar with this stuff because it’s going to have such an impact on everyone’s life that not understanding it is going to be quite risky. We’re going to be asked to vote on this stuff soon, and the better everyone understands how AI systems work, the more informed those decisions can be.

There are many things we need to solve from an AI perspective, and we need people who are more aware and more informed to help shape the future. I truly believe this. 

For example, when I’m in San Francisco I’m in the suburb I chose because it’s inside the Waymo coverage area. This means I get to see what level four, level five autonomy looks like on a daily basis through my Tesla.

It’s important to be close to and connected to leading innovation in the space, and I think it’s important that everyone can make an informed decision based on their understanding of how it works.” 

Strong Compute is currently hiring for a range of engineering roles across Sydney and San Francisco. Submit an expression of interest via their website, or reach out to the team via careers@strongcompute.com for more information. 
