Thoughts on SC16
We were honored to be presented with an HPCwire Editor's Choice Award for Best HPC Collaboration Between Government and Industry for work we did with PNNL.
This year’s SC was a big transition. In many ways, 2016 is the Year of Analytics. Supercomputers represent about a $14B market growing at an 8% CAGR, and Analytics surpassed it while growing at 32%. This is a very different environment from the one I remember when I first started thinking about large-scale graph-based Analytics almost 20 years ago, and even when we launched the Graph500 at SC10.
Despite developing scalable technologies, the HPC community hasn’t found good traction with Analytics, and even this year’s expanded program felt “safe”: machine learning for people who love Linpack, and graphs for people who understand meshes. Even the tensor work felt safe.
In many ways we’ve been waiting for the HPC community to embrace large-scale data analysis. But even setting aside specific deployments of analytics (e.g., in the hyperscale data center), it now seems likely that HPC will have to live on the technology base created by analytics, in effect becoming a smaller niche within that larger market and field of inquiry.