What’s next in research computing? Good things come in converged packages

Collaboration and accessibility are the new drivers of research computing. Historically, supercomputing was driven by very large systems running relatively limited sets of research queries. But researchers at leading institutions around the world tell us the challenge is no longer how big the supercomputer is, or even how much power you can put behind a single research case; it is how to make research computing and analysis available to as many research teams as possible in support of open and collaborative science. Nowhere is the power of collaboration and accessibility demonstrated better than in recent breakthroughs like the discovery of the Higgs boson particle at CERN, where three thousand researchers at dozens of universities worked together to show the world how this model can accelerate the scientific, academic and medical discoveries behind the world’s most complex problems.

Research computing has come a long way in the last 10 years and is now at an exciting inflection point. In that time, universities have led the move from closet clusters tucked away in a professor’s office or lab to centralized systems that provide unprecedented capacity through standards-based, scalable architectures supporting the inter- and intra-university collaboration that today’s challenging research environment requires.

Universities like the University of Texas and the University of Florida, and research institutions like CERN and The Translational Genomics Research Institute (TGen), are pioneers in the use of x86 clusters and innovative compute and collaboration models for groundbreaking research that is generating profound discoveries about the origins of matter and of disease.

At TGen, a Dell system has helped reduce the time required to analyze genomic data for a pediatric cancer patient from days to hours in the world’s first personalized medicine clinical trial for pediatric cancer. This is helping physicians identify the most effective treatment for patients based on the genetic make-up of each child’s tumor. The next step is to leverage clouds to simplify collaboration and information exchange between TGen and participating hospitals and create a knowledge base for even more effective treatments.

The best of everything we have learned about collaboration and accessibility came together in the experience of accelerating genomic analysis for TGen. That knowledge led directly to a new solution, Dell Active Infrastructure for HPC Life Sciences, which simplifies the analysis of large genomic data sets so that biomedical, life sciences and pharmaceutical companies and research organizations can accelerate innovation.

What’s next in research computing? The future lies in the same hands of those we serve. We are fortunate that the research community is focused on the scientific problems they are solving, while collaborating with us to describe exactly what they need to accelerate that work. Our job is to continue to supply those solutions and we strongly believe that good things come in converged packages like Active Infrastructure for HPC Life Sciences.

Learn more about the Dell HPC innovations for Genomics and Life Sciences announced today.
