Earley AI Podcast – Episode 53: Insights into Data Science and AI Validation with Bartek Roszak

From Trading Floors to AI Quality Assurance: Building Accurate RAG Systems Through Rigorous Testing and Human Validation

 

Guest: Bartek Roszak, Head of AI at STX Next

Host: Seth Earley, CEO at Earley Information Science

Co-host: Chris Featherstone, Sr. Director of AI/Data Product/Program Management at Salesforce

Published on: August 6, 2025

 

In this episode of the Earley AI Podcast, we welcome Bartek Roszak, an expert in artificial intelligence and data science. With a career starting as an equity trader and evolving to lead AI strategy and implementation at STX Next, Bartek brings deep insights into the world of AI-driven trading and data quality improvement.

Join hosts Seth Earley and Chris Featherstone as they explore Bartek's experiences, the nuances of deploying AI in trading, and the future potential of generative models.


Key Takeaways:

  • Quality assurance represents the biggest gap in current AI development, with organizations lacking proper separation between model developers and validators, leading to production failures.
  • The "80% fallacy" misleads organizations when initial AI prototypes appear promising but each incremental accuracy improvement becomes exponentially harder and more expensive to achieve.
  • Advanced RAG systems require re-rankers, retrained embedders customized to specific datasets, and fusion with keyword search rather than relying solely on vector similarity.
  • Knowledge architecture and metadata enrichment dramatically improve embedding quality by providing additional semantic signals that foundational models cannot infer from raw text alone.
  • Successful RAG implementations balance three critical factors: accuracy of responses, latency under two seconds, and intuitive user interface design that encourages adoption.
  • POC approaches lasting one to two months help organizations validate generative AI feasibility with their specific data before committing to year-long production implementations.
  • Human validation remains essential for evaluating generative AI outputs because automated LLM-based evaluation cannot reliably determine semantic equivalence between different narrative responses.

Insightful Quotes:

"The biggest gap that I see now is quality assurance. In normal software development we have QA teams separated from engineering, but we don't have good practices around AI QA. I've seen a lot of AI projects that failed because people who developed models were also the same guys who are evaluating the models." - Bartek Roszak

"People are really eager and it's very easy to ship an 80% ready prototype in even a couple of hours. And people then extrapolate that and say, if we did that in one day, what can we do in one month? It turns out that every percentage point that you try to get becomes almost exponentially harder." - Bartek Roszak

"For RAG system to be successful you need three very important things. One is accuracy, of course. Second is latency because you can easily create accurate system that will be really slow. And the third thing is really nice user interface. Without these three things, even if you have highly accurate system, people won't like to use it." - Bartek Roszak

Tune in to discover how rigorous quality assurance practices, strategic POC validation, and balanced system design separate successful AI implementations from expensive failures.

 

Links:
LinkedIn: https://www.linkedin.com/in/bartekroszak/

Website: https://www.stxnext.com


Ways to Tune In:
Earley AI Podcast: https://www.earley.com/earley-ai-podcast-home
Apple Podcast: https://podcasts.apple.com/podcast/id1586654770
Spotify: https://open.spotify.com/show/5nkcZvVYjHHj6wtBABqLbE?si=73cd5d5fc89f4781
iHeart Radio: https://www.iheart.com/podcast/269-earley-ai-podcast-87108370/
Stitcher: https://www.stitcher.com/show/earley-ai-podcast
Amazon Music: https://music.amazon.com/podcasts/18524b67-09cf-433f-82db-07b6213ad3ba/earley-ai-podcast
Buzzsprout: https://earleyai.buzzsprout.com/ 

 

Podcast Transcript: AI Quality Assurance, RAG Evolution, and Production Deployment Strategies

Transcript introduction

This transcript captures a detailed conversation between Seth Earley, Chris Featherstone, and Bartek Roszak about the essential but often neglected role of quality assurance in AI development, the technical evolution from naive to advanced RAG systems, and the practical considerations for deploying generative AI solutions in production environments.

Transcript

Seth Earley: Good morning, good afternoon, good evening, whenever you're listening to this, welcome to the show. My name is Seth Earley, and I'm here with Chris Featherstone. What I'd like to do is introduce our guest. We're really excited, and we're going to talk about many of the misconceptions around AI and generative AI. We always talk about misconceptions. We're going to talk about retrieval-augmented generation, which of course is the thing that people think is the next best thing, and there's good reason to think about that. We'll talk about some of the nuances of RAG. Our guest today is a renowned expert in artificial intelligence and data science. He has a unique background that includes economics. He started his career as an equity trader on the New York Stock Exchange and transitioned from using Excel for data-driven decision making to mastering advanced data science techniques with Python and R. He has a deep understanding of statistics and advanced mathematics, and that, coupled with his practical experience, really propelled him into the world of AI because it was an ideal background for it. Currently he is Head of AI at STX Next, where he leads AI strategy and implementation for a range of innovative projects. Bartek Roszak, welcome to the show.

Bartek Roszak: Hi Seth. Hi Chris. Thank you for the introduction. It's great to be here.

Chris Featherstone: So Bartek, it seems like, as an equity trader, you were building tools to help you trade faster, better, cheaper, stronger. Was that for you personally, or was it for the companies you were working for?

Bartek Roszak: I would say both. I started as a trader for a proprietary trading company during my Excel time, and after I started doing data science and deep learning, I also worked for a hedge fund where I built autonomous traders. But also during my free time I like to experiment with trading. Of course I have my own trading strategies, rather long-term ones, and I use a lot of data science there.

Chris Featherstone: Maybe, Seth, we should get an offline tutorial from Bartek on trading strategies.

Seth Earley: I'm in index funds. That's it. I'm in index funds, long-term buy and hold. I'm not a day trader.

Bartek Roszak: That's a pretty good strategy, and it's actually really hard to beat.

Seth Earley: The Wall Street Journal has people throwing darts every year. Have you heard about that? Completely random, and the random dart throwing typically outperforms the market, or at least outperforms some of the more advanced gurus and star traders out there. But let's talk about the space that we're in, the gen AI space, the AI space. What are you seeing in terms of your projects and initiatives? When you talk to executives, either on the technology or the business side, what are the things that people don't understand, or what do they get wrong? I know that's evolving as the industry matures and people become more aware, but give me your thoughts on what the common misconceptions are.

Bartek Roszak: I think that in AI, especially in gen AI, we have this common thing where new approaches get really hyped. A year and a half ago we heard that prompt engineering was the next big thing. Then we heard that RAG was the next big thing. Now I hear that the multi-agent approach is the next big thing. So I'm curious what the next big thing after that will be.

Seth Earley: Yeah, definitely RAG approaches with prompt engineering. Of course, this is all engineering the same stuff.

Bartek Roszak: All the same stuff, because we started achieving better results with prompt engineering, but it was not enough, so we started doing RAG. After we did RAG we saw the limitations, so we started doing multi-agents and so on. It's the normal process of development, I would say. In terms of what I see on the market right now, we have this huge hype. A lot of companies want to do AI. Some of them are mature in doing gen AI. Some started an AI project that failed, and they don't want to invest more money in projects like that now. But there's also a lot of great success in applying generative AI, RAG, and the multi-agent approach, and we see more and more companies that want to do the same.

Chris Featherstone: I've got a quick question too, because we usually ask about the misconceptions for AI, and Seth usually starts out with those. I love the fact that you come from a deep data science background. Going upstream a little bit, where do you think the data science teams miss in a lot of this?

Bartek Roszak: I think that the biggest... yeah, sorry, go ahead.

Chris Featherstone: Bartek, go ahead.

Bartek Roszak: Yeah, the huge gap that I see now is quality assurance. Basically, in normal software development we have QA teams, and it's pretty obvious that we need a QA team that is separated from the engineering team to check the quality of the engineers' work. But we actually don't have good practices around QA in AI. I've seen a lot of AI projects that failed because they didn't have proper QA, because the people who developed the models were also the same people evaluating the models, validating the models, and deploying the models to production. This is something that I believe is missing right now in AI, and it will change in the future. When I joined STX Next about a year ago, the first thing I started doing was meeting with the QA team and telling them: okay guys, you are great at software development QA, but you need to be good at AI QA as well, because I believe this is the next big thing in QA. And now I'm starting to see job offers like machine learning QA engineer or AI validator.

Seth Earley: So you're talking about quality assurance, right? When you say quality assurance, and when you define quality assurance in AI or in data, tell me more about what you mean by that, because I read a post the other day by a chief information officer or chief data officer, and he went on and on about quality data. Data quality is important: data quality, data quality, data quality. And yet it was very difficult to understand exactly what data he was talking about. He talked about training data, but training data can be interpreted in a lot of different ways. So when you talk about quality assurance, data quality, and quality assurance of models and performance, tell me more about what you mean specifically, and then how does that fit in with the different data sources? What do people need to do?

Bartek Roszak: That's actually a really good question, because at every stage of AI development we need some QA, and data is the first place where we need to apply it. There is a bunch of techniques, and it all depends on the use case, but it could be, for example, checking the distribution of new data: is the distribution the same as in our historical data? Another is checking missing data, how much missing data we have, and so on. Because if we retrieve some real-time data, there can always be interruptions, bugs, and changes in the data, and we need to be alerted about it; we may need to switch off models and so on. So that's the data part of QA. Of course it is much bigger, but that gives you some examples. Then we develop models. Assuming the historical data as well as the real-time data is good quality, we can develop models with it. And before we deploy the models, we need to validate them. There's a lot that can go wrong when data scientists develop new models: they can by accident miss a lot of things. They can miss data leakage, they can overfit the validation set, they can do a lot of things. Even extremely experienced data scientists, deep learning engineers, and researchers do it; I've seen mistakes like this many times, even though I was extremely aware of them. So validating models before we run them on production is another part of QA. And the last part of QA is what we do after the deployment stage. We are pretty sure that our model has high-quality data and was trained well, because we checked everything, and now we deploy it to production and it should work well, right? But maybe there is something missing, and we need to monitor that. The most common thing to monitor is the output distribution. There's more to it, of course, but the most common is: if we have, for example, a classifier that predicts multiple classes, we should pay attention to which classes we predict most often, and whether that is different from what we saw during the training phase. So this is also part of quality assurance.
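
As a rough illustration of the checks Bartek describes, distribution drift against historical data, missing-value rates, and monitoring the predicted-class distribution after deployment, the Python sketch below shows one minimal way this could look. The thresholds, statistical test, and alerting behavior are illustrative assumptions, not STX Next's actual pipeline.

```python
# Minimal sketch of data-quality and post-deployment checks (illustrative only).
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, incoming: np.ndarray, alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test between historical and new data."""
    _, p_value = ks_2samp(reference, incoming)
    return p_value < alpha

def too_many_missing(incoming: np.ndarray, max_missing_rate: float = 0.05) -> bool:
    """Alert when the share of missing values exceeds the allowed threshold."""
    return np.isnan(incoming).mean() > max_missing_rate

def output_distribution_shift(train_counts: dict, live_counts: dict, tolerance: float = 0.10) -> bool:
    """Compare predicted-class shares on production against the training phase."""
    total_train, total_live = sum(train_counts.values()), sum(live_counts.values())
    return any(
        abs(train_counts.get(c, 0) / total_train - live_counts.get(c, 0) / total_live) > tolerance
        for c in set(train_counts) | set(live_counts)
    )

# Example: a shifted feature triggers the drift alert.
reference = np.random.normal(0.0, 1.0, size=10_000)
incoming = np.random.normal(0.4, 1.0, size=1_000)
if drift_detected(reference, incoming) or too_many_missing(incoming):
    print("Data quality alert: pause predictions and investigate.")
```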

Seth Earley: So when you're talking about building a model, training a model, are you actually saying that you're retraining a foundational model rather than building a model from scratch? Or are you talking about training as in what the model needs to access in terms of data sources? How are you defining it? Because training can be defined in a lot of different ways.

Bartek Roszak: What I said was a more general approach to training machine learning models, not only generative AI models but any kind of machine learning model: every time we use data to train models to make predictions on production. In terms of generative AI, things are getting more complicated. Of course, we don't need to train the model; maybe we want to fine-tune a model, but that is a rather rare case. In the end, we can have a really hard time evaluating the quality of our solution, whether it's RAG, just prompt engineering, or a multi-agent approach. It's really hard to evaluate how accurate our solution is. If we would like a system that is able to answer questions about our knowledge base, say some information about our company's internal policies, and we want to create a RAG system that we can query, how can we check that our system is answering correctly? We can create an evaluation set of correct questions and answers, but it's not obvious; we cannot perfectly say that we have 80% accuracy or 90% accuracy. Sometimes the model can give you the right answer, but it will be different from the golden standard.

Seth Earley: So we have golden use cases where we know the answers, but those are narratives, and the narratives can change. So then we use a large language model to compare those narratives and to provide some prediction of accuracy or quality.

Bartek Roszak: Yeah, we can do it like that.

Seth Earley: So you have to kind of, or a human needs to, look at them and say these answers are equivalent, right?

Bartek Roszak: Actually, my personal view is that we don't have any good strategy other than manually checking the answers, and because we would like to check a number that is statistically significant, it's really hard to evaluate the model after each change. Of course we use some automatic systems; we use an LLM to check LLM answers and things like that. But I wouldn't say that is a perfect step. You need a human in the loop.
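
The evaluation loop described here, an LLM comparing generated answers against a golden set, with a human reviewing anything the judge is not sure about, might look roughly like the sketch below. The OpenAI model name, judge prompt, and routing rule are assumptions for illustration, not the setup actually used in the episode.

```python
# Sketch of LLM-as-judge evaluation with a human-in-the-loop fallback (illustrative).
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """Question: {question}
Golden answer: {golden}
Candidate answer: {candidate}
Do the two answers convey the same information? Reply with one word: YES, NO, or UNSURE."""

def judge(question: str, golden: str, candidate: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice of judge model
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(
            question=question, golden=golden, candidate=candidate)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper()

def evaluate(eval_set: list[dict]) -> list[dict]:
    """Return the items that a human still needs to review."""
    needs_human_review = []
    for item in eval_set:
        verdict = judge(item["question"], item["golden"], item["candidate"])
        if verdict != "YES":  # NO or UNSURE goes to a human reviewer
            needs_human_review.append({**item, "verdict": verdict})
    return needs_human_review
```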

Seth Earley: Right, for various iterations. Now going back to the misconceptions, one of the things we talked about on our prep call is that people expect it to just work. Tell me more about what you've experienced when it comes to gen AI. I think there was a little bit of naivete earlier in the market, and now that's been replaced by a more sober-minded approach. Tell me more about what people's expectations are, and then what you have to do to mitigate, deal with, or manage those expectations.

Bartek Roszak: Of course, expectations are high, really high, because the information we hear about gen AI is so promising that expectations sit at a very high level, and that is really hard to manage. Gen AI is a brand new technology, and we are never sure whether it will work correctly with your data; we need to check it. So every time we start a project with gen AI we propose doing a POC first: a quick, low-budget project that just shows what kind of possibilities we have with gen AI. For example, if you would like to develop a RAG system for your data, I would propose doing it in one or two months: deploy a default version of the system, fine-tune it a little bit, and check how accurate it is, just to give you something to play with and to check if it's enough for you. After that we create a report on what techniques we used, what our propositions are for the next steps, and what accuracy could be expected if we put more effort into it. Because if you would like a RAG system that is really accurate and you have complex data, it could take one year of two or three engineers doing fine-tuning, researching, and making the system accurate.

Seth Earley: So talk a little bit about how RAG has evolved. Originally we were saying, well, ask your LLM for the answer, and the LLM would come up with an answer that may not be based on reality; it would hallucinate. Then people said, well, use a knowledge base, but that fell back on the challenges of any basic knowledge base. What I saw initially were people just using full-text search, keyword search, and that's going to be only as good as your keyword search is today. If you don't have finely tuned content, and you don't have things like thesaurus structures, semantic search, or parametric search, it's going to be bad. If you have crappy content and you're using sparse keywords and ambiguous queries, it's going to be like search: it's going to suck, like search sucks. And then we got into advanced RAG. So talk about naive RAG, advanced RAG, and then a little bit about what modular RAG is from your perspective. Naive RAG is just: let's go off and search. We'll use the LLM to interpret the question, then we'll go to a data source, retrieve that, and then have a conversation with the LLM. But that had lots of drawbacks, so what was the next iteration or evolution?

Bartek Roszak: Yeah, of course. A more advanced RAG setup contains things like a re-ranker: a model that takes the query and the retrieved chunks of data, say the ten most probable, then compares them and chooses the answer. That is something that can increase accuracy. You can retrain, and you should retrain, the embedder to fit your data. This is also something an advanced RAG setup includes.

Seth Earley: Tell me more about that.

Bartek Roszak: So currently we have really nice embedders, simply models that change chunks of data, like text data, into an embedding, a mathematical representation.
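
For the re-ranking step Bartek mentions, scoring the query against the top retrieved chunks and keeping the best ones, a minimal sketch using a public cross-encoder checkpoint might look like this. The model name and top-k values are illustrative choices, not the ones used in his projects.

```python
# Sketch of a cross-encoder re-ranker over the top retrieved chunks (illustrative).
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, chunks: list[str], top_n: int = 3) -> list[str]:
    # Score every (query, chunk) pair, then keep the highest-scoring chunks.
    scores = reranker.predict([(query, chunk) for chunk in chunks])
    ranked = sorted(zip(chunks, scores), key=lambda pair: pair[1], reverse=True)
    return [chunk for chunk, _ in ranked[:top_n]]

# Usage: pass in the ~10 most probable chunks returned by the retriever.
best_chunks = rerank("How do I reset my password?", ["chunk one ...", "chunk two ..."])
```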

Seth Earley: Yeah, because all LLMs are essentially mathematical operators, right? They operate on numbers rather than words, and the words are converted to numbers. So we convert our text to a vector, a high-dimensional vector with lots of different characteristics, and then we bring that into the vector space. That is called an embedding, right?

Bartek Roszak: Yeah, exactly. So initially we just used the best embedder that was on the market. But a year or so ago the embedders were not really good, so every time you wanted to build a highly accurate RAG system, you needed to retrain this embedder with your data, in a way that puts the embeddings of a question and its proper answer close to each other, and wrong answers far from each other. That's the goal of the embedder. Currently we have new embedders, like E5, which is pretty nice, but you can still gain a lot of accuracy if you retrain the system with your data.
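
Retraining an embedder so that a question and its correct passage land close together, with wrong answers pushed further apart, can be sketched with sentence-transformers as below. The base model, training pairs, and hyperparameters are illustrative assumptions.

```python
# Sketch of fine-tuning an embedder on your own question/passage pairs (illustrative).
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("intfloat/e5-base-v2")

# (question, correct passage) pairs drawn from your own knowledge base.
train_examples = [
    InputExample(texts=["query: how do I reset my password?",
                        "passage: To reset your password, open Settings, then Security ..."]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
# In-batch negatives act as the "wrong answers" pushed further away.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
model.save("embedder-finetuned-on-our-data")
```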

Seth Earley: When you say train it with the data, what we've done is ingest the content along with metadata, so we had a knowledge architecture that provided additional tags, additional features of the content essentially, for an enriched embedding. This is why knowledge architecture is important, why knowledge bases are important, and why metadata is important. If you have these things tagged with the right product model, the right troubleshooting step, and the right error code, you can use that specificity. An LLM will not know those things unless you tell it, and the way you tell it is by tagging the chunks of content before you ingest them. Then, when you ingest them, you're bringing in the additional signals from that knowledge architecture to improve the LLM's ability to distinguish between a good and a bad answer. Does that make sense?
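
One way to fold the knowledge-architecture tags Seth describes into the embedding is simply to prepend them to each chunk before it is encoded, so the extra semantic signal travels with the text. The field names and model below are illustrative assumptions.

```python
# Sketch of metadata-enriched chunk embedding (illustrative field names and model).
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

def enrich_chunk(chunk: dict) -> str:
    """Fold structured tags into the text that gets embedded."""
    tags = (f"Product: {chunk['product_model']} | "
            f"Error code: {chunk['error_code']} | Topic: {chunk['topic']}")
    return f"{tags}\n{chunk['text']}"

chunks = [{
    "product_model": "X200",
    "error_code": "E42",
    "topic": "troubleshooting",
    "text": "If the device fails to boot, hold the power button for ten seconds ...",
}]

embeddings = embedder.encode([enrich_chunk(c) for c in chunks])
```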

Bartek Roszak: Yes, of course. This is one of the techniques you can use to increase accuracy, and it's also why you still need good engineers who are aware of the different techniques and can analyze your data and check which technique could be applied. It is not always the case that you can create metadata or filter out data during the retrieval process; sometimes it is possible, sometimes not. You always need good people who know where to use which techniques. But yes, this is one of the techniques that can increase the accuracy of the embedding system.

Chris Featherstone: You started to pull on a thread earlier around the models: are you taking a foundation model and doing your fine-tuning against that? Are you creating your own model? Are you continuously fine-tuning on top of those models? Because it seems to me, Bartek, that there's this art form, a combination of what you're talking about, where you potentially have a human in the loop, or the validation of the data in the beginning, and then the ability to figure out what to do with the model itself, which model is going to work best, and all that kind of stuff, and then the fine-tuning or continuous fine-tuning that goes into it. And do you need to add a human in the loop? Because I think what we're also talking about is that it depends on the use case as well, and on how those combinations need to come together. Will you talk a little bit about that? I'm assuming you have some models in production, and going from POC to production you've had to learn a lot of these different techniques that need to be applied. I don't know if you have some use cases in mind as examples of pulling that art form together.

Bartek Roszak: Yes, like you said, every use case is different and you need to apply different techniques. For example, we currently work on probably the most advanced search engine in the podcast environment; for a company called Podimo, we develop the search engine. One thing we needed to do there, along with the classic RAG system, was to create a fusion with keyword search. The second thing we needed to take into consideration is the way people usually search podcasts: they do it in a Google-like style, so, for example, they write half of a word because Google will know the answer anyway. You need to cover things like that too, so depending on the use case you need to use different techniques. More important is that you have the users in the loop: you need to check what they actually type into your system, in what way, and what they are looking for, and you need to adjust your system to your users. For a RAG system to be successful you need three very important things. One is accuracy, of course. The second is latency, because you can easily create an accurate system that is really, really slow; you can put in a lot of different models, double checks, and so on, but you will wait 15 seconds for the answer, and that makes no sense. And the third thing is a really nice user interface. These three things are extremely important to make the system user friendly and to actually get people to use it, because without them, even if you have a highly accurate system, people won't like to use it. Actually, people are the biggest barrier right now. Of course there are people who want to use the most advanced technology and will use everything even if there is no sense in it, but there's a big group of people who simply don't like change, and to convince them that they should use AI technology, you need to create a system that is accurate, super fast, and really nice in terms of interface.
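
The fusion of vector retrieval with keyword search that Bartek mentions is often done with something like reciprocal rank fusion; a generic sketch follows. The two result lists stand in for whatever vector store and keyword index (for example, BM25) are actually in use.

```python
# Sketch of reciprocal rank fusion over vector-search and keyword-search results (illustrative).
from collections import defaultdict

def reciprocal_rank_fusion(result_lists: list[list[str]], k: int = 60) -> list[str]:
    """Combine several ranked lists of document ids into one fused ranking."""
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["ep_7", "ep_2", "ep_9"]    # from the embedding index
keyword_hits = ["ep_2", "ep_5", "ep_7"]   # from the keyword / BM25 index
print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
# ep_2 and ep_7 rise to the top because both retrievers agree on them.
```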

Chris Featherstone: I love that. We have these 70-billion-parameter models, these 250-billion-parameter models, and so on. To your point on latency, I had a customer I was working with, an e-commerce site, that decided to throw their entire product catalog at the LLM because they might have a search about one product. Super inefficient, but because the context window was that big, they could do it. So as an add-on to that question, how do you also balance cost optimization? Because part of it is that I can slam and cram this against these large models all day long, but it's expensive as well. For me the add-on to the user experience is: how do you make it optimized so that you can use it for billions of people, or whatever the use case is, and not kill yourself?

Bartek Roszak: Of course, quantization is one of the strategies that makes a system extremely fast, and you can actually only apply double checks, re-rankers, and other techniques if you are able to quantize your model and still achieve good results, because without it the system will be too slow, or it will not fit in your memory, or you would need a huge machine with a lot of memory to handle your system. And if you would like to make your system scalable and serve two million users, you definitely need to do quantization of your model, your queries, and everything. Fortunately, these models are very sparse, the embeddings are very sparse, so quantization doesn't harm them too much.
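
As a rough sketch of the kind of quantization discussed here, the snippet below compresses embeddings to int8 to cut memory and speed up similarity search. The per-vector scaling scheme is deliberately simple and illustrative; production systems typically rely on a vector database's built-in quantization.

```python
# Sketch of int8 quantization of embeddings (illustrative scaling scheme).
import numpy as np

def quantize_int8(embeddings: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Per-vector symmetric int8 quantization; returns codes and scales."""
    scales = np.abs(embeddings).max(axis=1, keepdims=True) / 127.0
    codes = np.round(embeddings / scales).astype(np.int8)
    return codes, scales

def dequantize(codes: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return codes.astype(np.float32) * scales

embeddings = np.random.randn(1_000, 768).astype(np.float32)
codes, scales = quantize_int8(embeddings)      # roughly 4x smaller than float32
approx = dequantize(codes, scales)
print(np.abs(embeddings - approx).max())       # small reconstruction error
```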

Seth Earley: That's good. So we talked about naive RAG, and we talked about advanced RAG. Do you want to talk a little bit about modular RAG, your opinion of how that works, and how effective it is?

Bartek Roszak: I would say that this is the least mature type of RAG, especially in terms of production use cases where we can see how it works in production. This kind of system simply contains a lot of modules. For example, it not only looks for the answer in your knowledge base, it can search the Internet, it can combine those two sources, you can add other sources, apply re-rankers; you actually do a fusion of different types of RAG. The issue is that all of this makes it slow, so there are not many use cases where you can sacrifice latency for higher accuracy. Of course there are use cases like that, but I would say that 95% of our projects are systems that need to work really fast, and there's no way of doing modular RAG in that case.

Seth Earley: So we've been using it a little differently. We've used it with templated prompts and programmatic prompts that cycle through data, using modular RAG to ingest lots of different data sources, so it becomes more of a batch processing tool rather than a real-time retrieval tool, and we use it to improve the quality of data. We have one use case where we had very, very sparse data on product records, and we enriched them with SEO terminology, a richer product description, and additional attributes, categories, and keywords, and then made that audience specific. So it was a good use case, but it's a different use case, because it's about using it not in a real-time fashion but more in a batch processing fashion. It ties into the agent approaches too, because you're often using it to do a particular process.
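
The batch-style use Seth describes, cycling templated prompts over sparse product records to generate enriched, audience-specific descriptions offline, could be sketched as below. The model name, prompt template, and record fields are illustrative assumptions, not the actual implementation.

```python
# Sketch of batch product-record enrichment with templated prompts (illustrative).
from openai import OpenAI

client = OpenAI()

TEMPLATE = """Product name: {name}
Known attributes: {attributes}
Write a richer product description for the audience "{audience}" and list five SEO keywords."""

def enrich_record(record: dict, audience: str) -> str:
    prompt = TEMPLATE.format(name=record["name"],
                             attributes=record["attributes"],
                             audience=audience)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

sparse_records = [{"name": "Cordless Drill X200", "attributes": "18V, two batteries"}]
enriched = [enrich_record(r, audience="professional contractors") for r in sparse_records]
```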

Bartek Roszak: Yeah, the data you describe seems very complex, so it's probably a good use case for modular RAG to solve that issue.

Seth Earley: So tell us a little bit more about you. What do you do for fun? And tell us a little more about your journey of how you got to what you're doing.

Chris Featherstone: Who is Bartek? Sorry, go ahead. Yeah, who is Bartek outside of AI? Or maybe there isn't a Bartek outside of AI.

Bartek Roszak: I don't think there's any Bartek outside of AI. I would say my hobby, like I said, is writing models for the stock exchange, playing on the market and so on. This has actually been my passion for a very long time, and I'm still writing my own models and applying new techniques. It's also the way I stay up to date with current technology, because in my current role I am far from the coding and closer to the business, but I really like the technology and I need to be close to it to be effective. So this is also my playground where I can apply, say, new reinforcement learning techniques and things like that.

Chris Featherstone: So what do you think of all these AI trading bots that are out there? Obviously most of them are going to be pattern recognition, but you see a lot of rhetoric and a lot of folks out there saying, hey, look at my AI bot, it's made me thousands of dollars per month, and everything else. I can only imagine it's literally looking at the patterns of the candlesticks, leading indicators, lagging indicators, all that kind of stuff. I don't know what your thoughts are there. It's one of the things I watch too, and I'm a little suspect, but at the same time I think you have to build it yourself.

Bartek Roszak: I've been in this industry for a very long time, and I don't believe any statement like that. I know a lot of people who created, I wouldn't say bots like that, but strategies that work, and they don't tell anybody about them. The only reason you would want to tell more people that you have successful strategies is because you want to take their money and invest it; that is for big hedge funds and the like, which is of course obvious. But if we're talking about proprietary trading and day trading, where you just have your own strategy for yourself, you simply don't share your strategy and you don't share your results. Maybe I'm not up to date with the X platform and the people there and their claims; maybe there are some superstars who just like to be famous, I don't know. But my experience is that people who know how to earn money on the market stay silent.

Chris Featherstone: Well, it's fascinating. Thanks for your answer.

Seth Earley: Quick question. Sometimes I like to ask this: if you were able to go back to yourself when you graduated college, is there any advice you would give yourself, looking back on life?

Bartek Roszak: Yeah, I don't know, it's a hard question. I graduated a long time ago, and the current world is different. But probably the important thing is that I'm now working in an industry that is totally different from my educational background, and this is something that will probably happen to a lot of folks who are graduating now, because AI simply changes the way we work. So what I would recommend to people is: don't hesitate to change your background if you see potential somewhere else; go for it. That's one thing. And the second thing: if you feel that you can do something good, also go for it, even if you get rejected many times. Before I started my career at the New York Stock Exchange, I did a lot of strange things, because it happened that I entered the labor market during the 2007 financial crisis, so it was really hard to get a job in finance back then. I needed to do a lot of really simple jobs, and I could have stayed there because I got better and better at them, but I had different dreams, so I decided to pursue them, and I would recommend anyone to pursue theirs.

Seth Earley: What's the worst job you ever had?

Bartek Roszak: The worst job I ever had? Oh my God. Yeah.

Seth Earley: While you're coming up with one: I cleaned a muffin factory, and I shoveled horseshit. Keep going.

Bartek Roszak: Okay, so I think the worst job I had was working in a huge mail center. For the whole night, for 12 hours, I was packing the big trucks with packages. I remember this was probably my first job after I graduated from college.

Seth Earley: Kind of like working for Amazon today.

Bartek Roszak: Yeah, exactly. It was exactly like that. It was very hard.

Seth Earley: So let's see where people can find you. You can go to stxnext.com, that's the company, and you are on LinkedIn; it's just B-A-R-T-E-K, Roszak, R-O-S-Z-A-K. That will all be in the show notes. Bartek, I want to really thank you for your time today and for sharing your expertise.

Bartek Roszak: Thank you very much. Thank you for having me. It was a pleasure.

Seth Earley: It's been great. And thanks to everyone who's listening. And thank you, Carolyn and Chris. Again, Bartek, it was great to have you, and I look forward to continuing our conversations. Thank you.

Meet the Author
Earley Information Science Team

We're passionate about managing data, content, and organizational knowledge. For 25 years, we've supported business outcomes by making information findable, usable, and valuable.