In recent years, as particle physics experiments have grown in scale, their computing requirements have become enormous. The Belle II experiment needs several hundred petabytes of storage and more than 100,000 CPU cores. Operating such a huge amount of computing resources at a single site is not realistic. Instead, universities and research institutes around the world pool their computing resources and connect them over networks to build a "distributed computing system" that can be used as one huge computer.

International cooperation is essential to realizing such a system. By building a powerful distributed computing system together with the Belle II collaboration, we can deliver physics results quickly and remain competitive internationally. Some of the computers owned by N-lab are connected to computers around the world by high-speed networks, and simulation data generated on our machines is distributed to and analyzed by collaborators worldwide.

Realizing a distributed computing system requires not only pooling hardware but also advanced software. We monitor the distributed computers around the world to keep them operating efficiently, and we contribute to the development of control systems that can fix problems immediately.
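The "detect a problem and fix it immediately" idea behind such control systems can be sketched in miniature. The snippet below is only an illustration under assumed names (`Site`, `monitor_and_recover` are hypothetical), not the actual Belle II software: a monitor notices an unhealthy site and resubmits its jobs to the least-loaded healthy site.

```python
# Minimal sketch of automatic job recovery in a distributed computing
# system. All names here are hypothetical; real grid middleware is far
# more elaborate (pilot jobs, catalogs, transfer systems, etc.).
from dataclasses import dataclass, field


@dataclass
class Site:
    name: str
    healthy: bool = True
    jobs: list = field(default_factory=list)  # job IDs assigned to this site


def monitor_and_recover(sites):
    """Move jobs off unhealthy sites onto the least-loaded healthy site.

    Returns a list of (job, from_site, to_site) records describing
    every resubmission that was performed.
    """
    healthy = [s for s in sites if s.healthy]
    if not healthy:
        raise RuntimeError("no healthy site available")
    moved = []
    for site in sites:
        if site.healthy:
            continue
        while site.jobs:
            job = site.jobs.pop()
            # Simple load balancing: pick the healthy site with fewest jobs.
            target = min(healthy, key=lambda s: len(s.jobs))
            target.jobs.append(job)
            moved.append((job, site.name, target.name))
    return moved


# Example: site B fails, and its jobs are redistributed automatically.
sites = [Site("A", jobs=["j1"]), Site("B", jobs=["j2", "j3"]), Site("C")]
sites[1].healthy = False
moved = monitor_and_recover(sites)
```

In a real system the monitor would run continuously, pulling health metrics from each site rather than reading an in-memory flag, but the control loop has the same shape: observe, detect, redistribute.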
The N-lab computing system runs day and night, analyzing the large volumes of our experimental data in search of unknown phenomena.
An overview of the recent major updates (2021) is available here (in Japanese).