Building the Automated Digital Lab of the Future

Mark Fish presenting his keynote talk. Credit: Lab of the Future Congress Europe, Unmask Photography.

At the Lab of the Future Congress Europe 2025, professionals from across the biopharma, drug discovery and informatics industries gathered to share their research and discuss their vision for what the “Lab of the Future” could look like in the coming years.


Speaking in one of the first keynote talks of the event, Mark Fish, VP & GM of Digital Science and Automation Solutions at Thermo Fisher Scientific, outlined the digital transformation that many research labs are undertaking and set out his vision of an AI-native, automated digital lab that is designed for openness and collaboration.


From these digital foundations, Fish told the audience how it might be possible to “reimagine the scientist experience” for the modern era, highlighting Thermo Fisher’s recently announced partnership with BenchSci, a pioneer in AI for life sciences research, to develop new AI-powered research tools for scientists.


To learn more about the motivations behind digital lab transformation, the power of AI for life sciences research and the importance of industry collaboration, Technology Networks spoke with Fish at the Congress.

Alexander Beadle (AB):

In your keynote, you mentioned that there is a lot of pressure these days to do more with less. What does that look like in the biopharma and drug discovery space?


Mark Fish (MF):

Obviously, different companies are at different stages here, but in general, I think it does come from pressure on the healthcare system. The cost of bringing a new drug to market is very wide-ranging and, of course, depends on the modality and the indication that you're aiming for. But it's just not sustainable when you set that cost against what healthcare systems can bear, particularly as research is increasingly about rare diseases: you are effectively recovering that investment from a smaller population of people, so it becomes less and less tenable.


Tackling the challenge of R&D productivity is an industry problem that is putting pressure on pharma companies. It is not only a financial challenge; there is a post-COVID element to it where people’s expectations have shifted – they expect treatments and cures to come at a much faster pace than ever before. And certainly at the executive level, companies are trying to meet that market demand. A lot of companies have very bold strategies for 2030 growth, but they need to deliver that growth without significantly increasing their cost base. That is the pressure – it’s a case of asking: How do we go fast? How do we increase the flow through the pipeline downstream to launch? How do we get more new molecular entities out to market at a faster pace to sustain or grow stock prices and meet shareholders’ ambitions?


The reality is, a lot of the innovative scientific work that drives life science companies’ value is performed in the lab today. That is why lab transformation is becoming an executive challenge in businesses, especially as the lab is an area that has long been underfunded and overlooked.



AB:

How can an AI-forward digital transformation help to address these pressures?


MF:

Socially, we already see people starting to use AI in their everyday lives to get rid of mundane work, plan a holiday, get lifestyle advice or things like that. People are seeing the practical benefits of AI’s ability to pull together different data and structure the start of a plan.


People are now trying to figure out, “How can it help me as a research scientist?” For instance, I am looking through literature and I have a hypothesis – rather than having to look through hundreds of journals or search to find the right journals to look in, AI has done all that reading for us and it can put that information at our fingertips. It gives us a plausible and, in many cases nowadays, an explainable route to an idea.


Another example that I’d throw in there – and this isn’t that new for life sciences, as there has already been a lot of work on digital twins – is modeling, where generative AI is really just the next stage of evolution. Most reactions can be modeled in silico before the synthesis is done on the lab bench; retrosynthetic machine learning algorithms are already fairly well established. We also see this in biologics nowadays with models of cells. Say you want to build a monoclonal antibody – the structures you need have all been calculated by AlphaFold, and now people have digital twins of Chinese hamster ovary cells that can actually express that protein. We’ve been building digital twins of the real world and that lets us reduce the number of real experiments we need to do in a physical lab – it’s a shift from the wet lab to the dry lab.
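
As a toy illustration of that in silico idea (this example is not from the interview – the reaction, reagents and SMARTS pattern are simple stand-ins, and real retrosynthesis models are far more sophisticated), a single reaction can be modeled with the open-source RDKit toolkit before anyone picks up a pipette:

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# A simple amide coupling written as a reaction SMARTS:
# carboxylic acid + primary amine -> amide
rxn = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OX2H1].[NX3;H2:3]>>[C:1](=[O:2])[N:3]"
)

acid = Chem.MolFromSmiles("CC(=O)O")      # acetic acid
amine = Chem.MolFromSmiles("NCc1ccccc1")  # benzylamine

# Enumerate the predicted products entirely in silico
for products in rxn.RunReactants((acid, amine)):
    print(Chem.MolToSmiles(products[0]))  # CC(=O)NCc1ccccc1
```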


As we dig deeper into biopharma and we understand more about genetics and biology, we’re starting to see that it’s all about machinery and mechanisms – and we now understand those mechanisms to a degree where we are able to model them. The secrets of the cell are being unlocked and, with simulations and modeling, we’re able to make use of this in a way that would have been impossible, even computationally, a decade or so ago.



AB:

It’s important to define what we are actually talking about when we mention AI here. How would you explain that to someone who is perhaps not familiar with AI outside of the likes of ChatGPT and other large language models (LLMs)?


MF:

Starting with LLMs, what they do – and what computers are really good at – is read words very quickly. They can process words and books at a tremendous pace and what they look at is really the probability of different words and concepts being linked. At a more technical level, all those words and the connections between them get loaded into things like vector databases with probability weightings, and from there you can start to create neural networks and things like that. But at its core, it is basically looking at the probability of one word following another. So it can look at a question, deconstruct it and look at what the probable answers are to it. This is very probabilistic, but science is often black-and-white.
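
A toy sketch of that next-word idea (the corpus and code below are purely illustrative – a real LLM adds enormous scale, embeddings and neural networks on top of it):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then turn the
# counts into probabilities: "the probability of one word following
# another", in miniature.
corpus = "the cell expressed the protein and the cell divided".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cell': 0.67, 'protein': 0.33}
```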


That is where semantic engineering comes in. If you take all of this data and those papers, and you start to identify the components of it that are important for the science, you start to build up a vocabulary of an experiment. It has materials, reagents and conditions. That structured map of the connections between words and what they mean scientifically is a knowledge graph.


Knowledge graphs

In artificial intelligence, knowledge graphs represent a network of real-world entities – such as objects or situations – and illustrate the nature of the relationships between them. Knowledge graphs can be used to add context to more data-driven AI techniques, facilitate the integration of data sources and serve as a bridge between humans and machine systems by generating human-readable explanations.
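
As a minimal sketch (the entities and relations below are invented for illustration), an experiment knowledge graph can be represented with the networkx library:

```python
import networkx as nx

# A few entities and typed relationships from a hypothetical experiment
kg = nx.MultiDiGraph()
kg.add_edge("Experiment-42", "acetonitrile", relation="uses_reagent")
kg.add_edge("Experiment-42", "37 degC", relation="run_at_condition")
kg.add_edge("acetonitrile", "MeCN", relation="synonym_of")
kg.add_edge("acetonitrile", "solvent", relation="is_a")

# Generate a human-readable explanation by walking one entity's edges
for _, target, data in kg.edges("acetonitrile", data=True):
    print(f"acetonitrile --{data['relation']}--> {target}")
```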


You can train it to know that certain chemicals may have multiple names, that the letter “M” can also mean “male”, that there are many words that can have alternatives – that is semantic engineering as well. A chemist, when they read a paragraph, will just know that a certain functional group is the same as another functional group but just named in a different way. With a knowledge graph and semantic engineering, you’ve given AI that same scientific basis. It does need human input, of course, because you need somebody to build up that controlled vocabulary and link those concepts in a knowledge graph. But this allows scientists to effectively “pass on” their knowledge in a structured way.
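
A toy version of that controlled vocabulary might be nothing more than a lookup from surface forms to canonical terms (the mappings here are illustrative stand-ins for what a curated scientific vocabulary would contain):

```python
# Map the many surface forms a scientist would recognize on sight
# onto one canonical term each (illustrative entries only)
SYNONYMS = {
    "m": "male",
    "mecn": "acetonitrile",
    "ch3cn": "acetonitrile",
    "etoh": "ethanol",
}

def normalize(token: str) -> str:
    """Return the canonical vocabulary term for a raw token."""
    return SYNONYMS.get(token.strip().lower(), token)

print(normalize("MeCN"))  # acetonitrile
print(normalize("M"))     # male
```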



AB:

As you identified in your keynote, one of the cornerstones of building the automated digital lab of the future is having an open scientific ecosystem. What would that openness look like?


MF:

From a technical point of view and from our perspective, openness means looking at standards and making sure we engineer our tools to comply with those standards and make integrating equipment as easy as possible. Where there are no standards, it means being open about the way we are doing things and sharing technical information openly.


What we all want is to achieve faster science and spend less time wiring A to B. If organizations do get more open about sharing technological approaches, design patterns and even code, sharing what works and looking at things like ontologies, that is another form of standardization. We can start to define those standards that help achieve an outcome – and I think that is the really important thing here, focusing on the scientific outcomes that we are trying to achieve and solving them together as an industry. That includes technology providers, of course, but also customers themselves. That is why events like the Lab of the Future Congress are fantastic, because the quality of the talks has been phenomenal and we are putting down really strong scientific foundations.


Additionally, I think we’ve seen a change of pace in the industry that is quite unprecedented. COVID started this wave, where we realized that just throwing more people at a problem is not a sustainable solution. Those heroic efforts of people working super-long shifts and adapting to home working – there was so much change there and we achieved amazing things, but it was almost like being on a wartime footing. Now we are trying to figure out how to work with things like generative AI, machine learning, robotics and automation. If we’d had more time to invest in these things before COVID, the industry would probably have been better prepared. That is a lesson – we can’t just work harder, we have to work smarter. That can mean bringing physical automation in, looking at how you manage your data and applying AI or machine learning or generative models to drive science faster. We are at a tipping point where we’re going to see some fantastic evolution.



AB:

Lots of labs are still very invested in their legacy systems. Many might also be hesitant to undertake a digital transformation while they are in the middle of important ongoing research projects. Do you have any advice for labs such as these?


MF:

It really comes back to understanding what the benefits are. In many cases, it may not make sense to take an existing application and re-engineer it to work with AI in the cloud – but in some cases, it will. It is a case of clarifying your problems and the business benefits of solving them. Sometimes it may be better to leave a customized instrument that does one particular analysis and runs bespoke software alone, and just accept it as it is. But there are things we can do to make its connectivity smarter, perhaps with image recognition or by looking at how you might retrofit Internet of Things technology to this type of equipment. You have to look at where the minimum effort creates the maximum benefit for the largest number of people, and start there.
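
As a hedged sketch of that retrofitting idea (the broker address, topic and reading below are placeholders, not any specific product), a legacy instrument’s output could be bridged onto the lab network with a few lines of Python and the paho-mqtt client:

```python
import json
import time

import paho.mqtt.publish as publish

# A small script running beside a legacy instrument: read a value
# (here a stand-in number; in practice scraped from a serial port or
# log file) and publish it to an MQTT broker for the rest of the lab.
reading = {"timestamp": time.time(), "absorbance": 0.42}

publish.single(
    "lab/instrument-07/reading",    # hypothetical topic
    json.dumps(reading),
    hostname="broker.example.org",  # placeholder broker address
    qos=1,
)
```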



AB:

Thermo Fisher will shortly be introducing the Thermo Fisher Connect Platform, Enterprise edition. Can you tell us more about this? 


MF:

We’re very excited about the Connect Platform. We are releasing version 4 of the platform in November and it is an important release that brings together a lot of hard work. We’re excited to be scaling our offering and our strategic partnerships. Partnership is very important to us – accelerating science really needs an ecosystem of innovation that embraces open standards, collaboration and new ways of working. For our clients, it’s an imperative to accelerate science, and labs, along with people, are at the heart of this change.

We are starting to demonstrate how you can bring AI technology to the lab, powering up scientists with a type of scientific assistant to help with research and drive that forward into things such as hypothesis development and experimental planning. From our point of view, our platform is ready to enable these AI tools. We’ve got a fantastic partnership with BenchSci and the ASCEND™ platform to really demonstrate to clients the value of having that assistant for their experiments and how it can accelerate their work. I am passionate when I see our teams working side by side with customers and the wider industry to set the pace of innovation and help science move faster, smarter and further.