Guest: Krishna Rangasayee, Founder and CEO, SiMa.ai
Host: Seth Earley, CEO at Earley Information Science
Published on: January 16, 2026
In this episode of the Earley AI Podcast, host Seth Earley welcomes Krishna Rangasayee, Founder and CEO of SiMa.ai, for a grounded conversation on what it takes to make AI work in real-world environments. The discussion focuses on moving beyond hype to address the practical challenges of deploying AI systems that are efficient, scalable, and reliable at the edge.
Krishna brings decades of experience across hardware, software, and AI systems design. He shares why many AI initiatives struggle outside controlled environments and how organizations must rethink architecture, performance, and context when deploying AI closer to where data is created and decisions are made. The episode explores why efficiency is not just a cost concern but a core enabler of real-time intelligence across industries.
Key Takeaways from this Episode:
Common misconceptions about AI readiness and why scaling models alone does not lead to success
Why edge AI is critical for real time decision making, latency reduction, and operational reliability
How efficiency at the hardware and system level unlocks new AI use cases
The importance of aligning AI architecture with real world constraints such as power, bandwidth, and deployment conditions
Why organizations must rethink the balance between cloud and edge computing
How leadership and culture influence whether AI experimentation turns into production impact
Insightful Quotes from the Show:
"AI success is not about chasing bigger models. It is about understanding the environment where AI actually has to operate and designing systems that work within real constraints." - Seth Earley
"If you want AI to deliver value in the real world, efficiency has to be designed in from the start. Otherwise, intelligence never makes it past the lab." - Krishna Rangasayee
Links
LinkedIn: https://www.linkedin.com/in/krishnarangasayee/
Website: https://sima.ai
Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home
Apple Podcast: https://podcasts.apple.com/podcast/id1586654770
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE?si=73cd5d5fc89f4781
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/
Stitcher: https://www.stitcher.com/show/earley-ai-podcast
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast
Buzzsprout: https://earleyai.buzzsprout.com/
Podcast Transcript: Scaling AI from Cloud to Physical World Through Power-Efficient Edge Computing
Transcript introduction
This transcript captures a conversation between Seth Earley and Krishna Rangasayee exploring the critical transition from cloud-based AI to physical AI systems, examining the architectural requirements, power constraints, and lifecycle management challenges organizations face when deploying AI in robots, vehicles, and edge devices where performance, efficiency, and reliability are paramount.
Transcript
Seth Earley: Welcome to the Earley AI Podcast. My name is Seth Earley, I'm your host, and each episode explores how artificial intelligence is shaping how organizations manage information, make decisions, and turn knowledge into real business outcomes.
Seth Earley: Today, we're talking about what it takes to scale AI in the physical world, not just in labs and demos, but in production systems at the edge.
Seth Earley: Robots, cars, medical devices, and other intelligent infrastructure, where performance, efficiency, and reliability are really critical.
Seth Earley: And as AI expands beyond centralized computing, questions around cost, power consumption, latency, operational readiness, those are all becoming very important. So, joining me today is Krishna Rangasayee.
Seth Earley: Did I say it right? Close enough?
Seth Earley: Good enough for government work. Founder and CEO of SiMa.ai, that's S-I-M-A. Krishna works at the intersection of AI software, specialized hardware, and systems architecture, and his focus is on delivering efficient, high-performance AI at the edge, enabling intelligent systems to operate closer to where the data is generated and where decisions are made. Krishna, welcome to the show.
Krishna Rangasayee: Thank you, Seth. Such a pleasure. Thanks for the great intro, too. Wow, I need to save that for later use.
Seth Earley: I know, it's always when people introduce you, you're like, oh, wow, did I… did I write that?
Seth Earley: Anyway…
Seth Earley: So, let's start off with this: when people think about AI, they're thinking about chatbots, they're thinking about the cloud. What do you mean when we talk about AI in the physical world? How is that fundamentally different, and what does it really mean?
Krishna Rangasayee: Yeah, and I think AI has gone through its evolution, where the footprint was mostly consumer, along the lines of ChatGPT or LLM agents.
Krishna Rangasayee: And in the cloud construct, we have derived the benefits of AI for the last 10-15 years: AI in the cloud, AI in the consumer persona.
Krishna Rangasayee: I'm totally convinced that where AI will really bring value to human beings is when it touches our lives every day.
Krishna Rangasayee: So for cars, for medical devices, for home appliances, things we live and breathe and touch every day, as AI starts coming into our lifestyle, is when I think we're going to truly see the benefit of AI.
Krishna Rangasayee: And so, it's not that the cloud is going to go away. There is a reason and a value that the cloud is still going to retain, but increasingly, you're going to find a balance, with the physical world we live in benefiting from AI and coexisting with cloud infrastructure.
Seth Earley: Hmm.
Seth Earley: So, you've said that the industry is moving from rules-based computing to machine learning. What has really changed in the last couple of years? We know this has been emerging, we know it's been around for a long time, we know AI in lots of incarnations, but
Seth Earley: what do you consider the real inflection point? Obviously, large language models are critically important, but what else would you say is really driving it?
Krishna Rangasayee: So, I mean, if you really take a step back.
Krishna Rangasayee: neural networks have been around forever, so this is not a new concept. What's really, I think, been interesting is our ability to be able to deploy neural networks in a commercially scalable way.
Krishna Rangasayee: And that's really the difference that chips are enabling today, right? So we have seen the benefit of what were CNNs, or convolutional neural networks.
Krishna Rangasayee: We saw the evolution of transformer-based architectures.
Krishna Rangasayee: And broadly, though LLMs are capturing everybody's mind today, reasoning-based architectures are, for the first time in humanity, ever able to be commercially deployed.
Seth Earley: A form factor that's digestible in cost.
Krishna Rangasayee: and empowering.
Krishna Rangasayee: And we are seeing this in the cloud, along the lines of chat agents or chatbots, but you're going to see reasoning-based architectures, and the compute paradigm shifting from rule-based to analytical to now context- and memory-aware, like human beings think.
Krishna Rangasayee: This is for the first time in the history of humanity.
Seth Earley: Right, right.
Krishna Rangasayee: You're all looking at this big, big tipping point.
Seth Earley: Yeah.
Krishna Rangasayee: I'm sure historians are going to write about this tipping point for a long, long time to come. So this is the Renaissance age of computing, right? It's an amazing time.
Seth Earley: Yeah, yeah, it really, it really is, and every day, every time I use a large language model, I'm astounded, you know? And you start taking that for granted, you start, you know, then you start getting angry when it misses something, you know?
Seth Earley: When it's getting…
Krishna Rangasayee: I'm kidding.
Seth Earley: these complex documents, and it's… and you're asking all these questions, it's like, you missed this point over here, what's wrong with you? You know, oh, I'm so sorry, it'll apologize, and yeah, yes, you made a good point, I, you know, I'll go back and rethink this, and it's like, you're… you're… you're really interacting with something that is kind of mind-boggling and…
Krishna Rangasayee: Absolutely. No, I would say, I mean, I think LLMs are the reasoning architecture flavor of the year.
Krishna Rangasayee: I promise you that every year, there's going to be an evolution. We are going to get better and better at it, where models will get more accurate, more personalized, and more context-aware, and be able to really be nimble and quick.
Krishna Rangasayee: And we are seeing so many implementations of it, even in the physical AI world.
Seth Earley: Oh.
Seth Earley: So, let's talk… you mentioned, we talked a little bit about inference, and a lot of times people, you know, blur training and inference, right? They think about just AI and training these models, but… but they're very different. What is important, especially with physical AI systems, in terms of the difference between training and inference?
Krishna Rangasayee: Sure. So, training is a very different universe than inference, right? And so, I'll take a step back. So, training is where you're… you have a set of data.
Krishna Rangasayee: You're training a model to really, recognize the data and really learn along with the data set.
Krishna Rangasayee: So, when you're given a new data point, you know what to recognize and what probability to place on it, right? In the early days of AI, training, from a revenue and market focus, has taken on the biggest footprint. So inference is still a relatively smaller market compared to training.
Krishna Rangasayee: But once you've trained these models, and once you start deploying them in the cloud, and increasingly now in the physical AI world, it's going to be more and more an inference world.
Krishna Rangasayee: You're going to see that the inference market is going to be 10x, maybe even 100x, the size of the training market going forward, because you don't need training everywhere. Once you have really trained a model, it's now about scale of deployment.
Krishna Rangasayee: And it's going to be more and more an inference world, particularly in physical AI. And SiMa, as a company, if you will, we are an inference company.
Seth Earley: And inference is really giving the AI model the problems to solve. It's actually doing the work once you have the training completed.
Krishna Rangasayee: Correct. And that requires, really, a different way of thinking about power, and a different way of thinking about architecture, and so on. Absolutely.
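As an editorial aside, the training-versus-inference split discussed above can be sketched in plain Python: training sweeps a dataset many times to fit parameters, while inference is a single cheap forward pass with no gradients. The model, data, and numbers here are invented for illustration and are not anything from SiMa.

```python
# Hypothetical toy example: fit y = w * x by gradient descent (training),
# then use the learned weight for a single prediction (inference).

def train(xs, ys, lr=0.01, epochs=500):
    """Training: repeated sweeps over the data, computing gradients."""
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            grad = 2 * (w * x - y) * x  # derivative of (w*x - y)^2 w.r.t. w
            w -= lr * grad
    return w

def infer(w, x):
    """Inference: one multiply, no gradients, no data sweep."""
    return w * x

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # ground-truth relation: y = 2x

w = train(xs, ys)
print(round(infer(w, 5.0), 2))  # prints 10.0: the model has learned w ≈ 2
```

The asymmetry is the point: training is compute-hungry and happens rarely, while inference runs constantly on deployed devices, which is why power-efficient inference dominates at the edge.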
Seth Earley: What, you know, when you, when you, think about that, you know, what is… what's…
Seth Earley: particularly important in physical AI and AI at the edge. And, you know, what makes it a different problem when it lives in a robot or a car or a medical device?
Krishna Rangasayee: Absolutely. So, let's start with the most basic of problems. While the cloud really doesn't care about power, they should.
Seth Earley: You will have to.
Krishna Rangasayee: They have to, but at this point, people are having a fear of missing out and a mad rush to enable capability without thinking through the power implications.
Seth Earley: Right.
Krishna Rangasayee: At the physical AI world, you actually have no option at all but to really think through power.
Seth Earley: Hmm.
Krishna Rangasayee: Power budgets in the physical AI world tend to be 5 watts or 10 watts, and that's the limit of what you really have, right? So power and thermal are huge limiters. So you cannot take a GPU that you use for training, or a GPU that you're using in the cloud, and assume the same architecture will hold the same value at the edge.
Krishna Rangasayee: And at the edge, power and thermal are unforgiving. You almost go the other way, saying, hey, I have 5 watts, I have 10 watts, what capability can I fit into 5-10 watts? So that's problem number one.
Krishna Rangasayee: Problem number two is the sheer variability and the sheer scale of it. In the cloud, hyperscalers, it's 7 customers.
Krishna Rangasayee: The physical AI world is thousands, tens of thousands of customers.
Krishna Rangasayee: So the scalability of the problem, being able to be cost-efficient,
Krishna Rangasayee: and also software flexibility, is really key, right? So the needs of the edge, or the physical AI world, are very different. The third large variable is that in the physical AI world, you and I are living with robots, we're living with cars: safety,
Seth Earley: Security.
Krishna Rangasayee: privacy.
Krishna Rangasayee: And, if you will, the human-machine interface and the real world have huge implications architecturally that you don't need to worry about in the cloud.
Krishna Rangasayee: If your ChatGPT or your LLM agent made a mistake,
Seth Earley: Yeah.
Krishna Rangasayee: nobody died.
Seth Earley: Right.
Seth Earley: Right.
Krishna Rangasayee: That's not the case in a car, right?
Krishna Rangasayee: Or a robot, right? Yeah. So you're living with safety, privacy,
Seth Earley: No.
Krishna Rangasayee: security constraints that are radically different than in the cloud. So…
Krishna Rangasayee: Our outlook is that there will be a new class of architectures that will have to be built for the needs of scaling in the physical AI world, architectures quite divergent from what's being used today. And that's our thesis behind why we started our company, too.
Seth Earley: Yeah, and so what are the misconceptions? What do executives not understand? What do they… when they look at deploying AI into real world systems, what are they not understanding, or what are they missing, or what are the misconceptions?
Krishna Rangasayee: So…
Seth Earley: I would say a bit… You're touching on them here, but I just wanted to kind of…
Krishna Rangasayee: Now, with all new technologies introduced into traditional industries, the learning curve of what it takes to scale something in production has a longer timeline than everybody expects.
Seth Earley: Right.
Krishna Rangasayee: One popular misconception people have is they think AI is magic.
Krishna Rangasayee: And then you just show up, and as somebody said, it's like a Betty Crocker moment. You just add water, and it works.
Seth Earley: Oh, wait.
Krishna Rangasayee: That's far from the truth. AI is really predicated on the quality of the data that you train it on, and the quality of the data that you have.
Krishna Rangasayee: And what's the other adage? It's garbage in, garbage out.
Seth Earley: Sure.
Krishna Rangasayee: Right? And so people overlook the fact that a model is only as good as how well it's trained. A model is only as good as how well it's deployed. So that's one popular misconception.
Krishna Rangasayee: The second one is people don't understand the life cycle of AI, in that it's not about showing a proof point and being done. In rule-based compute, you get the problem right once. You can then scale that as long as you want.
Krishna Rangasayee: And the rule-based construct doesn't change.
Krishna Rangasayee: In AI, there's every day a revisit: where's my model, what's my drift, what are my false positives and false negatives? Am I learning from the mistakes I'm seeing? And there's an ML operations, or MLOps, infrastructure that helps with lifecycle maintenance.
Krishna Rangasayee: And so AI is a perennial, everyday, learning, breathing entity. Nurturing that infrastructure to scale is radically different than what people are used to, right? So these are new paradigms.
Krishna Rangasayee: I have no doubt we'll look back at this phase and laugh at some point. I've joked in some forums that when we discovered fire, we went through the same hazing ritual, too. Over-exuberance will settle into pragmatism.
Seth Earley: Yeah, right. This stuff is really useful, yeah. But wow, it's dangerous.
Krishna Rangasayee: No different. But AI is overly hyped to be very simple. Particularly in the physical world, it's a lot more complicated.
Seth Earley: Yeah, yeah. And what do people underestimate? You're alluding to some of these things, but what do they really miss? What specifically do they not understand or underestimate when they're trying to build this out in the physical world?
Krishna Rangasayee: No, I think they need to think through the life cycle consequences.
Krishna Rangasayee: So that's number one. Number two, we… I mean, less in the physical AI world, but we see a lot of companies getting into AI for the sake of AI.
Krishna Rangasayee: And what really matters at the end of the day is two things. What is my use case?
Krishna Rangasayee: What's the tangible delta I'm going to bring with an AI-based infrastructure?
Krishna Rangasayee: And third, no doubt, the capitalistic element: will this make revenue, and will it make me profit?
Krishna Rangasayee: So these usual-suspect things cannot just be saved by a magic wand of AI. AI is a tool, but it doesn't take away the need for use cases, it doesn't take away the need for deliberate lifecycle maintenance and management, nor does it take away the need for revenue and profit.
Krishna Rangasayee: Take the consumer world today. I promise you, 80-90% of the companies that are building wrappers around LLM agents
Krishna Rangasayee: may not be around for long, because at the end of the day, it's a capitalistic world; nobody's in it as a non-profit matter.
Seth Earley: So, you talk a little bit about lifecycle management. What are the stages that you need to think about when you're looking at AI and lifecycle management?
Krishna Rangasayee: Sure, so I think it's, in some ways, no different than what we've traditionally done, but there's…
Krishna Rangasayee: devils in the details of the amount of variability and the amount of AI that you need to face. So I would say, no doubt, everybody starts off with a POC, or proof-of-concept phase, saying, hey, I've done this all my life with this outlook, or it's a new problem to solve.
Krishna Rangasayee: Let me prove out the capability of a proof of concept. What will I get out of it? What cost? What power? What accuracy? What capabilities will I get out of it? So that's one phase where it's a very different set of dynamics and needs, even a very different set of an organization that does that.
Krishna Rangasayee: Then you have the pilot phase, where you say, okay, I'm going to deploy it over 10 systems, and I'm going to now get an aggregate outlook: if I do it over 10 systems, or a projected 100,000 systems one day,
Krishna Rangasayee: what would it look like, and how would I deploy?
Krishna Rangasayee: The third one is, I'm now in volume production.
Krishna Rangasayee: and I've deployed it, how do I maintain it? How do I upgrade my models? How do I get better data? How do I make my system more accurate? How do I make it more feature-capable or feature-rich?
Krishna Rangasayee: And then the last one is.
Krishna Rangasayee: maybe this has reached its final stage of value, and I need a totally different foundation for it. How do you phase out an existing production program, and how do you phase in the new one?
Seth Earley: Hmm.
Krishna Rangasayee: What I'm describing to you is a standard engineering deployment phasing that we have lived with for 100 years, or maybe even more. But the AI-centric implications of it bring an entirely different outlook.
Krishna Rangasayee: An entirely different set of organizational capabilities that not every company in the world is sitting on, saying,
Seth Earley: Hey, I'm ready to go.
Krishna Rangasayee: So we're all practicing and preaching and learning simultaneously.
Seth Earley: Yeah, true, yeah.
Krishna Rangasayee: And the world's moving so fast; the pace of innovation is so radically different. And most of the physical AI customers have been around for 50, 100 years.
Krishna Rangasayee: They built their strengths and competencies in different areas, and AI is a new muscle
Krishna Rangasayee: that they have to gain. And so, expectedly, it's going to take time, but none of us are patient, and we all want results tomorrow morning.
Krishna Rangasayee: So, human beings, as long as we remain who we are, we'll struggle through these every day. But if history is a proof point, we'll all get across the finish line.
Seth Earley: I'm really excited for the new world that they're getting into.
Seth Earley: And where are you seeing the greatest impact? Give us some examples of where efficient edge inference has changed what systems can do, what those systems look like, and what's possible.
Krishna Rangasayee: Absolutely. So I would say I'll park it into things that we've already seen, and things that I think we're going to see.
Krishna Rangasayee: One simple area where things have changed a lot is drones.
Seth Earley: Hmm.
Krishna Rangasayee: So these used to be mostly consumer-ish toy elements.
Krishna Rangasayee: Right.
Krishna Rangasayee: And you're gonna see that drones are gonna be a huge component of our life in mobility. In 2-3 years, very rarely.
Seth Earley: is somebody going to drive to your place to hand off a DoorDash or a food delivery?
Krishna Rangasayee: Or a postal delivery. The majority of it is going to be drone-based.
Krishna Rangasayee: And so, this is one area where power matters a lot.
Krishna Rangasayee: There's a lot of mobility and mapping and targeting and safety that needs to come in, but this is something that's already at scale today.
Krishna Rangasayee: you're gonna see more of it. That's one simple example of it.
Krishna Rangasayee: The other area where you're already seeing AI really kick in is industrial floor automation.
Seth Earley: So…
Krishna Rangasayee: You're really seeing capabilities or accuracy elements or productivity elements that were impossible previously with traditional rule-based compute.
Krishna Rangasayee: And so, as our supply chains are getting robust,
Seth Earley: and given the challenges we've had with labor shortages, you're seeing a huge deployment of robotics infrastructure.
Krishna Rangasayee: And also in capabilities in industrial floor automation. These are two examples of things already in production at scale, where the AI capability is well proven.
Krishna Rangasayee: Some of the new areas where you're going to see it: if I were to pick a horizontal that's just going to pervade all markets, it's
Krishna Rangasayee: conversational AI.
Seth Earley: It's really gonna be huge.
Seth Earley: So, where you could talk to machines like you talk to a human being.
Krishna Rangasayee: Right. Machines reason along with you, like we do with human beings. No longer is it, hey Siri, hey Alexa. No longer is it, hey Mercedes, wake up.
Krishna Rangasayee: You're just going to be context-aware, memory-aware, and everything that you do in the cloud is going to be locally embedded on the device, the car. If you're driving your car and you have no 5G access, you do not want to be handicapped.
Krishna Rangasayee: Right? And so you're going to see a huge paradigm shift in edge-based appliances, where power matters a lot. Most cars are going to be EV-based, and running 1 kilowatt just for compute and AI is not tenable.
Krishna Rangasayee: You're going to see this in robotics in a very big way.
Krishna Rangasayee: Power is at the very core of it, power efficiency. And I'll leave you with one last vignette, which I'm sure everybody's going through: data centers are already 4.5%, 5% of global power consumption.
Krishna Rangasayee: And we are poised, in 2030 or 2032, to double that.
Seth Earley: Hmm.
Krishna Rangasayee: 10% of the world's power is going to be just data centers in a very, very few years.
Krishna Rangasayee: By contrast, the physical AI world is 28% of global power consumption today, already, pre-AI.
Krishna Rangasayee: If that pivots to AI and has no control on power,
Krishna Rangasayee: I don't know where we end up.
Seth Earley: Wow.
Krishna Rangasayee: So getting ahead on power at the edge is really, really critical.
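The power figures in this exchange can be sanity-checked with back-of-envelope arithmetic. The shares (roughly 5% for data centers today, 28% for the physical world) and the doubling window come from the conversation itself; the smooth-growth model and the exact 6-year horizon are editorial assumptions.

```python
# Back-of-envelope check: if data centers hold ~5% of global power today and
# that share doubles over ~6 years, they end near the ~10% figure mentioned.

datacenter_share_today = 0.05   # ~4.5-5% per the conversation
doubling_years = 6              # assumed horizon (today to ~2030/2032)
annual_growth = 2 ** (1 / doubling_years)  # growth rate that doubles in 6 years

share = datacenter_share_today
for _ in range(doubling_years):
    share *= annual_growth

physical_world_share = 0.28     # physical AI world today, pre-AI, per Krishna

print(f"data centers in ~6 years: {share:.0%}")   # ~10%
print(f"physical world today:     {physical_world_share:.0%}")
```

The contrast is the argument: even a doubled data-center share sits near 10%, while the physical world already draws 28%; if that 28% pivots to AI without power discipline, it dwarfs the data-center story.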
Seth Earley: Yeah, yeah. So tell me a little bit more about where you play in that space. You're offering software and hardware. And talk a little bit about your philosophy. This isn't meant to be a sales pitch for your company, but conceptually, where are you, how do you address these problems, and then,
Seth Earley: you know, how do organizations engage with you? Who are your…
Krishna Rangasayee: Yeah, so I've been a student of the physical world for all my career, and so this is an area where I've spent a lot of time, and in hindsight, built good businesses and technology behind it.
Krishna Rangasayee: My observation is that I think there are two things needed for physical AI to scale meaningfully.
Seth Earley: Two things.
Krishna Rangasayee: One is performance per watt.
Krishna Rangasayee: Not just performance alone. Everybody wants amazing performance.
Krishna Rangasayee: Performance per watt is really, really critical.
Krishna Rangasayee: Because you're either limited by power, or you're limited by the ability to dissipate heat, which is thermal. Table stakes.
Seth Earley: Right.
Krishna Rangasayee: Number two: this is a world that's diffused. You don't have 5 customers. You have 40,000, 50,000 customers globally.
Krishna Rangasayee: They need to be self-managed, and they need to be able to derive the benefit of AI without the learning curve of AI.
Krishna Rangasayee: So, scalability and ease of use in software is the second vector that's really very critical.
Seth Earley: People need to be self-managed, and you need to make AI very digestible.
Krishna Rangasayee: The analogy I draw is.
Krishna Rangasayee: Before iPhone, we used to have thick manuals that came with every one of our phones.
Seth Earley: You had to press 5 buttons to get a function going.
Krishna Rangasayee: We lived with our BlackBerrys, we lived with our Nokia phones.
Krishna Rangasayee: Then comes the iPhone: no manual, you press a button, you move on.
Krishna Rangasayee: That paradigm shift needs to happen to really hit scale in production physical AI, right? So, no doubt, those are our two key vectors of capability. And we have done a really good job with both of them.
Krishna Rangasayee: And to your point, I'll not make this a marketing pitch for our company, but I am entirely convinced that those are the two critical things to solve if you want to hit scale in physical AI.
Krishna Rangasayee: And companies that do a good job with it, I'm entirely sure, are going to be household names in a few years.
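Krishna's performance-per-watt framing can be made concrete with a toy comparison. Every number below is hypothetical, chosen only to show the arithmetic; these are not specs for SiMa's chips, NVIDIA's, or anyone else's.

```python
# Hypothetical comparison: raw performance vs. performance per watt under an
# edge power budget. TOPS = tera-operations per second, a common inference metric.

def perf_per_watt(tops, watts):
    """Efficiency metric: how much inference throughput each watt buys."""
    return tops / watts

candidates = {
    "datacenter_gpu": {"tops": 300.0, "watts": 350.0},  # fast, but power-hungry
    "edge_accelerator": {"tops": 25.0, "watts": 8.0},   # slower, far more efficient
}

budget_watts = 10.0  # the 5-10 W envelope mentioned in the conversation

for name, c in candidates.items():
    eff = perf_per_watt(c["tops"], c["watts"])
    fits = c["watts"] <= budget_watts
    print(f"{name}: {eff:.2f} TOPS/W, fits {budget_watts:.0f} W budget: {fits}")
```

On raw throughput the datacenter part wins 12x; per watt, the edge part wins by roughly 3.6x, and it is the only one that fits the budget at all. That inversion is the "performance per watt, not performance alone" argument.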
Seth Earley: Right.
Seth Earley: So,
Seth Earley: What do you see as… well, let me ask you a question. A lot of times, you know, NVIDIA shows up as kind of the default choice for, you know, AI hardware. Where does that work well, and where do you need to look at edge AI players differently?
Krishna Rangasayee: Yeah, so I'm amazed at what NVIDIA's done in the last 10 years. I don't know anybody on the planet who isn't.
Krishna Rangasayee: Amazing company, and if you will, they've kind of led the charge on AI and AI adoption globally.
Krishna Rangasayee: But our thesis is that, I think, they're really fundamentally built around a GPU-centric architecture that was built for graphics.
Krishna Rangasayee: And they've extended that capability, and it's fit well in the construct of data centers and in the cloud. They're a general-purpose architecture.
Krishna Rangasayee: But you're going to see more and more targeted silicon and software architectures that are divergent for the needs of different markets. I don't believe there's a one-size-fits-all where a GPU-centric architecture can scale for all problems globally.
Krishna Rangasayee: In the early, nascent, proof-of-concept and early-production phases, no doubt, I think that's going to be the case.
Krishna Rangasayee: But you're going to see very divergent architectures to solve different problems.
Krishna Rangasayee: Particularly as it comes to the physical AI world, like I told you. Power and thermal matter a lot.
Krishna Rangasayee: And on software scalability, I don't think you could really be a CUDA company and scale meaningfully.
Seth Earley: Hmm.
Krishna Rangasayee: You would need to be open source. You need very different architectures, either in silicon or in software, to scale meaningfully.
Krishna Rangasayee: This is where I think you're going to see that a one-size-fits-all is not going to work.
Krishna Rangasayee: And our approach is open source, our approach is to solve for power and power efficiency from day one, and we are a radically different architecture than GPUs, right? So, as AI scales, you're going to see a lot of divergent architectures really come into play.
Seth Earley: Say more about your architecture and how it's different. You know, maybe you can dive in a little bit of some of the details.
Krishna Rangasayee: At the highest order bit, we had the benefit of a clean slate.
Krishna Rangasayee: And we said, if we were to land from Mars, what would we build? That's a huge luxury that some of the large public companies do not have, right? We could do exactly what we wanted. Number two, ours is a very software-centric architecture.
Krishna Rangasayee: We knew that AI is going to move at a rapid pace from a software evolution, but silicon cadence is once every three years. So how do you really deliver something that's market relevant, while silicon has a very different cadence than AI?
Krishna Rangasayee: We decided to really take a very software-centric architecture innovation, and our approach has been very different.
Krishna Rangasayee: Third.
Krishna Rangasayee: 70% of the power and performance gains come from data management and memory management, and have almost nothing to do with the AI itself. We have taken full advantage of that architecturally.
Krishna Rangasayee: And built something that's very lightweight on on-chip cost and on-chip memory.
Krishna Rangasayee: But we have relied on an architecture that uses external DDR, so people can load as complicated a model as they want, and we have clever software architectures that work with it.
Krishna Rangasayee: So that we meet the customer's needs in cost, power, and performance, while really giving them scalability on software, right? So these are the things that we have done that's very different.
Krishna Rangasayee: Our architecture is divergent, and quite different from anybody else's.
Seth Earley: And, what about the compatibility with existing software and…
Krishna Rangasayee: Absolutely.
Seth Earley: approaches and methodologies that people are familiar with.
Krishna Rangasayee: So while our architecture is divergent, we are completely open source compliant. So what I mean by that is we can work with any ML framework.
Krishna Rangasayee: whether it's ONNX, or PyTorch, or TensorFlow, we take them as they are.
Krishna Rangasayee: We're also on Linux.
Krishna Rangasayee: We're also on OpenCL and OpenCV, so these are
Krishna Rangasayee: well-known practices of 10, 15, 30 years, and so everything we do is open source.
Krishna Rangasayee: And I've joked that we are the Ellis Island of ML.
Seth Earley: Hmm.
Seth Earley: I was gonna say…
Krishna Rangasayee: Or, give us your tired, give us your…
Seth Earley: But once they come in, they're American. Yeah.
Krishna Rangasayee: That's funny. So that's the closest analogy I draw in, in that what we do inside is pretty complicated and quite different.
Krishna Rangasayee: But our interfaces are all industry standard. So the friction in migrating from a known base to us is low, if not zero, so…
Seth Earley: And what kind of companies are…
Seth Earley: actively investigating your architecture? Are these smaller, innovative, entrepreneurial companies, or do you also have some of the bigger players? Who's looking at your approach and your methodology?
Krishna Rangasayee: We have both camps.
Krishna Rangasayee: Yeah, we have absolutely both camps, and I would say, in the 7 years we've run the company, we have gone from early adoption in the market to fast movers, so we are in that phase right now.
Krishna Rangasayee: Many of the big companies… now it's becoming more and more clear that there are AI haves and have-nots. There's a market separation even in their commercial successes. So we are seeing everybody jump in.
Krishna Rangasayee: We have a good, happy medium of both. Early fast movers that are nimble and small, but also large companies that are also becoming nimble and really moving forward, so we have both.
Seth Earley: And then, what is your competition, like? In other words, you said there's a lot of these players, I imagine there's larger organizations, and then niche players, so tell us a little bit about that.
Krishna Rangasayee: Yeah, so I would say, we primarily end up competing mostly with, our GPU friends.
Krishna Rangasayee: We also occasionally see DSP-centric competition, primarily from our friends in San Diego.
Krishna Rangasayee: And we compete with them well too, particularly as it comes to automotive and automotive infotainment.
Krishna Rangasayee: There are a lot of smart, amazing startups that have come around. But most of them are ML accelerators only.
Krishna Rangasayee: They don't take the full-system, system-on-a-chip approach that I think we take.
Krishna Rangasayee: And so that's given us a large differentiation against them.
Krishna Rangasayee: And the Achilles' heel for the industry is definitely software.
Krishna Rangasayee: And that's an area where we have really done well. So, if I were to summarize, I would say our primary competition ends up being our GPU friends.
Krishna Rangasayee: But we are respectful of everybody's capability. It behooves us to be paranoid and not take anything for granted.
Seth Earley: Right.
Krishna Rangasayee: You have to… we're a startup, and we have to fight like hell for anything and everything.
Seth Earley: How many folks are you these days?
Krishna Rangasayee: So we are 200 plus.
Seth Earley: Okay, yeah.
Krishna Rangasayee: And, our daily struggle is a David versus Goliath story.
Seth Earley: Hmm.
Krishna Rangasayee: You know.
Seth Earley: So, when you look ahead, where do you see the biggest opportunities in physical AI? And where does the hype, particularly around humanoid robots, get ahead of reality? I mean, again, it's hard to…
Seth Earley: Calibrate, because things change so quickly, but where… what do you see as sort of the near-term and the midterm future?
Krishna Rangasayee: So, I would say… I joke about this in a different form. We tend to be totally wrong on expectations near term.
Seth Earley: Yup.
Krishna Rangasayee: And we tend to be totally wrong on expectations longer term. So that's just how things work.
Seth Earley: You overestimate what you can do short-term, and you underestimate what you can do long-term. More eloquently put than I did. Yes, correct.
Krishna Rangasayee: And so, I would say.
Seth Earley: Overestimate what you can do in a day, underestimate what you can do in a year.
Krishna Rangasayee: So, steady state, I would say the volume drivers for physical AI are no doubt going to be automotive.
Seth Earley: Yup.
Krishna Rangasayee: Robotics.
Krishna Rangasayee: and drones. Those would be the 3 volume drivers, if I were to pick.
Krishna Rangasayee: And I'm using robotics as a broad category. Yeah. As you know, there are so many sub-segments even within robotics. But you're going to see this everywhere: in medical, in ag tech, in transportation, in aerospace and defense, in smart vision systems. So physical AI is going to scale everywhere, but if I were to pick 3…
Krishna Rangasayee: applications, it's drones, automotive, and robotics, right? So that's where they are.
Krishna Rangasayee: And we're seeing the same three vectors today.
Krishna Rangasayee: And I'd say that as and when these systems get more and more AI-enabled, you're going to see a lot more growth in those vectors.
Krishna Rangasayee: As it comes to humanoid robots,
Krishna Rangasayee: my view, and I think maybe I'm a little contrarian in where things are, or maybe I'm not, is
Krishna Rangasayee: In some ways, humanoid robots are a more complex problem than autonomous cars.
Seth Earley: Sure.
Krishna Rangasayee: There is tremendous hype on humanoid robots right now, and
Krishna Rangasayee: We are participants in it, too, so I should not really be poo-pooing it too much; we're enjoying the benefit of it as well. But the true production capacity for general-purpose humanoid robots in that form factor,
Krishna Rangasayee: where the safety considerations, the scalability considerations, the cost considerations, and the power considerations are all factored in…
Krishna Rangasayee: It will absolutely happen, but in my view, at a much longer timeline than they're thinking through today.
Seth Earley: What do you see as a timeline?
Krishna Rangasayee: I would say 10-15 years.
Seth Earley: Yeah.
Krishna Rangasayee: And so I am definitely a contrarian on that, and particularly as somebody living in the middle of it, I think I pain my colleagues and friends a little with my protracted timeline. But take cost alone, right? It all comes down to
Krishna Rangasayee: what's the cost the market's willing to bear?
Krishna Rangasayee: If you want things at $10,000 to $15,000 for a general-purpose humanoid robot to be in your home, fold your laundry, do the dishes for you, and be a sentry, whatever you want it to be, an AI companion or AI guardian, if you will…
Seth Earley: Oh, dude.
Krishna Rangasayee: And the compute cost itself is $3,500 to $7,000.
Seth Earley: Yeah.
Krishna Rangasayee: it's hard for me to do the math, right? So my general rule of thumb is if your system cost is X,
Krishna Rangasayee: Your compute cost needs to be 1 tenth of that.
Seth Earley: Hmm. Interesting.
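[Editor's note: the one-tenth rule of thumb and the dollar figures quoted above can be sanity-checked with a few lines of Python. This is a purely illustrative sketch; the helper name is ours, and the figures are the ones mentioned in the conversation.]

```python
def compute_budget(system_cost):
    """Rule of thumb from the conversation: compute cost should be
    roughly one tenth of the total system cost."""
    return system_cost / 10

# Target humanoid price points mentioned in the episode: $10k and $15k.
for system_cost in (10_000, 15_000):
    print(f"${system_cost:,} robot -> compute budget ${compute_budget(system_cost):,.0f}")

# Compute cost cited today: $3,500-$7,000 per unit. Even the low end
# exceeds the budget implied by a $15,000 robot.
print(3_500 > compute_budget(15_000))
```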
Krishna Rangasayee: The systems that are being deployed today are 150 to 300 watts.
Krishna Rangasayee: That's not obtainable in a real humanoid form factor, right? So we disagree with many of the peers that are rushing into it. Pilots are fine.
Seth Earley: You're saying the compute requires that level of power?
Krishna Rangasayee: Absolutely, right? And that's power alone.
Seth Earley: What does it need to get down to?
Krishna Rangasayee: Less than 50 watts.
Krishna Rangasayee: Right? And so… proof of concepts are fine, but I think there's a rush into "I'll get my functionality and worry about cost and power later."
Krishna Rangasayee: But this is like saying, let me get a Ferrari and I'll figure out how to make it a Fiat.
Krishna Rangasayee: Some things are not that practical, and so you almost need a very different outlook in saying, if my cost structure is $15,000, how do I back-solve for it meeting my functionality with the cost and power of compute that I need? So, I worry many are rushing into it.
Seth Earley: Saying, hey, I'll figure out the cost equation later.
Krishna Rangasayee: This is not a cloud where you could just throw things there and worry about things later, right? So you need to give it a lot of front… upfront thinking.
Krishna Rangasayee: I would submit to you that
Krishna Rangasayee: At this point, China, if you were at CES, has done a far better job in pragmatically saying, if my cost structure is X, $10,000, $15,000,
Krishna Rangasayee: How do I get the most compute, most power efficient, and how do I bridge the gap?
Krishna Rangasayee: And clearly, they're not there today, but they're iterating from a far better basis than the one I see, particularly in the Western Hemisphere.
Krishna Rangasayee: So our thesis is, we are built for cost and power as a company, and so, no doubt, I am biased in my outlook.
Krishna Rangasayee: But…
Krishna Rangasayee: I've never seen systems that want to be $15,000 built on a cost structure of compute alone at $3,500 or $4,000.
Seth Earley: Right.
Krishna Rangasayee: the economics just are not there. And the cost of humanoid robots is not compute alone; I look at the actuators, I look at the overall mechanical components that we have.
Seth Earley: Sure.
Krishna Rangasayee: And I would say the median of where the humanoid industry is on cost is $70,000 to $80,000 today.
Seth Earley: Hmm.
Krishna Rangasayee: Everybody wants to be $10,000 to $20,000.
Krishna Rangasayee: And… but the how-we-get-there story differs, and this is where I think time is going to be a large, large consideration. And we've been waiting for autonomous cars for 15 years.
Seth Earley: Yeah, of course, yeah.
Krishna Rangasayee: Even mundane, simple tasks that we take for granted, like grasping and manipulation with a robotic arm at a human-like capacity, are a very complicated problem.
Seth Earley: Yeah.
Krishna Rangasayee: As complicated, if not more complicated than autonomous cars.
Seth Earley: Right.
Krishna Rangasayee: I joke around this with everybody that's done this for 15, 20 years.
Krishna Rangasayee: For folks getting into it new, there is a huge exuberance.
Krishna Rangasayee: And for folks that have done it for a lot, there's a lot of…
Krishna Rangasayee: reticence about jumping into it. Somewhere between those two polar-opposite views there's a happy medium, and that's where it's going to land. So it's going to happen, but on a much longer timeline and, I submit to you, on a totally different compute basis than the GPU-centric outlooks people have today.
Seth Earley: Tell me what your roadmap is, so what do you see, your organization in the next 3 to 5 years?
Krishna Rangasayee: Sure, yeah, so I think by now our tech is well-proven.
Krishna Rangasayee: We no doubt have an evolution and a roadmap for improving our tech on software and hardware, so that's definitely the case. A lot is happening on robotics and automotive, and we really want to double down on that.
Krishna Rangasayee: If I were to pick a macro element of where we are as a company, scaling. I mean, that's really our priority and focus. Scaling on our customers.
Krishna Rangasayee: Scaling on our commercial success, scaling on our partner ecosystem.
Krishna Rangasayee: Scaling on our go-to-market, scaling on our software teams and our support teams. So if I were to pick one broad theme of what I wake up to every day saying, my God, we need to do better, we need to do more, scaling.
Seth Earley: Yeah.
Seth Earley: Well, that's great. What an interesting area, full of change and innovation and fast-moving IP. As you say, it's a fascinating time to be involved and alive in this environment, seeing what AI is doing to our world.
Seth Earley: So, this has been a great conversation. Thank you for sharing your insights and
Seth Earley: You know, what it takes to build these systems, and what we're in for in terms of real-world settings.
Seth Earley: And to our listeners, thank you for joining us for another episode of the Earley AI Podcast. Stay with us; we're going to continue to explore how AI is moving from experimentation to
Seth Earley: impact across society and organizations. And again, thank you, Krishna. I really appreciate your time today.
Krishna Rangasayee: Thank you so much, Seth. Thoroughly enjoyed the conversation.
Seth Earley: Excellent. Well, we'll let you go, and we'll see everyone next time on the next Earley AI Podcast.