News related to recent market trends, investors, research, travelling, engineering and books.



RESEARCH: Data goggles for first responders

With data goggles for rescue workers, scientists want to improve initial care in the event of major accidents and disasters involving many injured people.


Data glasses are expected to provide a higher quality of care. Doctors and participants in the Aachen test are convinced of this.
Photo: Czaplik / Uniklinik RWTH Aachen
On the ground, rescue workers can connect to a telemedicine physician via camera and thus be relieved, as the results of a research group at the Aachen University Hospital have shown so far. In addition, checklists and guidelines are integrated.

At the end of the project, the scientists tested the technology in Aachen in a simulated accident scenario with two derailed wagons and 16 injured people. The advantages and disadvantages of both approaches are to be evaluated later on, for example regarding speed and error rate.
"We know that data gauges lead to a significant improvement. The quality of care is definitely increasing, "explained Michael Czaplik, doctor and scientist at RWTH Aachen University. However, working with the system took more time.

Link: VDI



HIGH PERFORMANCE COMPUTING

The Olympus of fast computers:

Hardware is becoming more energy-efficient and increasingly supports methods of artificial intelligence. This makes it possible to simulate complex processes in ever greater detail and, above all, faster.

Hazel Hen: The system at the High-Performance Computing Center Stuttgart is utilized to 95% by customer jobs. (Photo: Boris Lehrner for HLRS)
The Top500 list of the most powerful supercomputers is still the benchmark for success in high-performance computing (HPC). The list, compiled twice a year, is based on a performance benchmark called Linpack. In the meantime, however, there are also other benchmarks, such as the HPCG benchmark, which is based on typical memory operations of commercial software, or the Green500, which ranks systems by computing power per watt of power consumed.
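To make the difference between the two rankings concrete, here is a minimal sketch in Python. The system names and figures are purely illustrative, not actual Top500 or Green500 entries; the point is only that the same machines order differently by raw Linpack performance than by performance per watt.

```python
# Minimal sketch (hypothetical numbers): ranking the same systems by the
# Top500 criterion (Linpack Rmax) and by the Green500 criterion
# (Linpack performance per watt of power consumed).

systems = [
    # name,      Rmax in petaflops, power in megawatts  (illustrative values only)
    ("System A", 93.0, 15.4),
    ("System B", 19.6, 2.3),
    ("System C", 7.4, 3.2),
]

def gflops_per_watt(rmax_pflops: float, power_mw: float) -> float:
    """Green500-style metric: sustained GFLOPS per watt."""
    gflops = rmax_pflops * 1e6   # 1 petaflops = 1e6 gigaflops
    watts = power_mw * 1e6       # 1 MW = 1e6 W
    return gflops / watts

print("Ranked by raw Linpack performance (Top500 view):")
for name, rmax, power in sorted(systems, key=lambda s: -s[1]):
    print(f"  {name}: {rmax:.1f} PFLOPS")

print("Ranked by energy efficiency (Green500 view):")
for name, rmax, power in sorted(systems, key=lambda s: -gflops_per_watt(s[1], s[2])):
    print(f"  {name}: {gflops_per_watt(rmax, power):.2f} GFLOPS/W")
```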

In the latter segment, computing accelerators play an important role in the top group: graphics processors (GPUs), which facilitate the parallel processing of data and at the same time consume less power than conventional processors. The manufacturer Nvidia is the driving force in the GPU market. Tesla P100 GPUs together with Intel processors have given the Swiss supercomputer "Piz Daint" the power needed to reach the top three (see box).

However, this acceleration must also be matched by the interconnects between processors, GPUs and memory. For two years now, Intel's Omni-Path Architecture (OPA) has enjoyed great popularity. It offers a high data throughput of 100 Gbit/s on the PCIe bus, low latency and low power consumption in compatible devices. This success points to a growing problem of high-performance technology: the pure computing power is now usually fast enough, but the connection between the individual components, which is still mostly based on the Ethernet and InfiniBand standards, is lagging behind. The PCIe 3.0 bus had not been updated for many years; only since June 2017 have there been new specifications. They are expected to allow up to 128 Gbit/s in full-duplex mode.
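As a rough plausibility check on such bandwidth figures, the sketch below computes the nominal per-direction throughput of a PCIe link from the per-lane transfer rate and the 128b/130b line encoding used since PCIe 3.0. The generation and lane-count combinations are assumptions for illustration and are not tied to a specific OPA product or to the exact configuration the article's 128 Gbit/s figure refers to.

```python
# Back-of-the-envelope sketch: theoretical PCIe link bandwidth per direction,
# derived from the per-lane transfer rate and the 128b/130b line encoding
# used since PCIe 3.0. Figures are nominal; real throughput is lower due to
# protocol overhead.

def pcie_bandwidth_gbit(transfer_rate_gt_s: float, lanes: int) -> float:
    """Usable bandwidth in Gbit/s for one direction of a PCIe link."""
    encoding_efficiency = 128 / 130   # 128b/130b encoding (PCIe 3.0 and later)
    return transfer_rate_gt_s * encoding_efficiency * lanes

for gen, rate in (("PCIe 3.0", 8.0), ("PCIe 4.0", 16.0)):
    for lanes in (8, 16):
        bw = pcie_bandwidth_gbit(rate, lanes)
        print(f"{gen} x{lanes}: ~{bw:.0f} Gbit/s per direction (~{bw / 8:.1f} GB/s)")
```

A PCIe 3.0 x16 slot thus lands at roughly 126 Gbit/s per direction, which is why a 100 Gbit/s fabric adapter already comes close to saturating it.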
The search for energy-saving computer architectures in the various national and university HPC projects has one main reason: the enormous energy consumption that future systems will have. Exceptional computing power on the exaflops scale means a quintillion (10^18) computing operations per second and high cooling requirements. The industry calls this goal exascale. "Such systems have an energy requirement of 20 MW to 30 MW," explains Klaus Gottschalk, HPC expert at IBM Germany. This requirement could already be met today, but at unacceptable cost. Gottschalk therefore expects an exascale architecture to arrive only from 2020, or perhaps not until 2023.
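The quoted 20 MW to 30 MW budget translates directly into a required energy efficiency. A minimal sketch of that arithmetic, assuming a 1-exaflops (10^18 operations per second) system:

```python
# Rough arithmetic behind the exascale power problem: how energy-efficient
# (in GFLOPS per watt) would a 1-exaflops system have to be to stay within
# the power budgets quoted above?

EXAFLOPS = 1e18  # floating-point operations per second

for budget_mw in (20, 30):
    watts = budget_mw * 1e6
    required_gflops_per_watt = (EXAFLOPS / watts) / 1e9
    print(f"{budget_mw} MW budget -> at least {required_gflops_per_watt:.0f} GFLOPS/W sustained")
```

In other words, staying within 20 MW requires sustaining about 50 GFLOPS per watt across the whole machine, well above what typical systems of that period delivered.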
Michael Resch, director of the High-Performance Computing Center Stuttgart (HLRS) and chairman of the Gauss Centre for Supercomputing, sees no point in the exascale race. The center's Cray XC40 supercomputer "Hazel Hen" delivers a peak performance of 7.42 petaflops, but the industrial and business customers who use the computer want real, sustained performance, which is currently about 1 petaflops. Reliability and availability are as important as pure computing performance, according to Resch. In addition, numerous computing projects run simultaneously and in parallel. "According to Chinese figures, Chinese supercomputers are only 15% to 20% utilized," Resch pointed out at the International Supercomputing Conference in Frankfurt am Main. "Our Hazel Hen is about 95% utilized all the time."

HPC has a wealth of application fields. At the HLRS, it is primarily materials science. But meteorologists also benefit. For example, in order to better track storms or predict their occurrence, experts in the USA use methods of artificial intelligence (AI). A new model for global weather forecasting is being developed by the University Corporation for Atmospheric Research (UCAR), the National Center for Atmospheric Research (NCAR) and the IBM subsidiary Weather.com. Data volumes of up to 1 petabyte are processed in main memory using appropriate algorithms and models. The goal is to make predictions more accurate for smaller and smaller regions. The simulation of global climate change can also be realized with these methods.
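To illustrate why finer regional resolution pushes data volumes toward that scale, here is a back-of-the-envelope sketch. The grid spacing, number of vertical levels, variable count and precision are assumptions chosen for illustration only and are not taken from the UCAR/NCAR/Weather.com model.

```python
# Illustrative only: how the in-memory footprint of a global forecast grid
# grows as the horizontal resolution shrinks. All parameters are assumed
# for the sketch, not taken from any specific weather model.

EARTH_SURFACE_KM2 = 510e6   # approximate surface area of the Earth

def grid_bytes(spacing_km: float, levels: int = 100, variables: int = 10,
               bytes_per_value: int = 4) -> float:
    """Bytes needed to hold one time step of a global grid."""
    columns = EARTH_SURFACE_KM2 / (spacing_km ** 2)   # horizontal cells
    return columns * levels * variables * bytes_per_value

for spacing in (13.0, 3.0, 1.0):
    gb = grid_bytes(spacing) / 1e9
    print(f"{spacing:>4.1f} km grid: ~{gb:,.0f} GB per time step")

# Ensemble members and stored time steps multiply this further, which is how
# forecast workloads approach the petabyte range mentioned above.
```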

Nvidia has developed its new V100 GPU especially for AI applications. The first units were handed over to 15 AI research institutions last week, including the German Research Center for Artificial Intelligence (DFKI) and the Max Planck Institute for Intelligent Systems. The GPU has 21 billion transistors. At Princeton University, nuclear fusion researchers want to use these chips to push the accuracy of their predictions of plasma disruptions up to 95%. Industrial demand for AI is also high, e.g. for calculations and simulations in the search for raw materials such as oil and gas.
While the hardware is making significant progress, the software must keep up as well. Not every application is suitable for every HPC environment, since the prerequisite is always the parallel processing of data in a computing cluster. Since 2015, HPC has been merging with the analysis of big data into HPDA (High Performance Data Analytics). In HPDA, the use of AI algorithms is now essential. In this field, Cray presented Urika-XC, an analytics software suite for Cray XC supercomputers, at the ISC conference.
The suite includes tools for graph analysis, AI technology, and big data analysis. "This allows analysis and AI workloads to be run in parallel with scientific modeling and simulations, eliminating the time and cost of migrating data between systems," says the manufacturer. The solution is already in use on the Swiss system Piz Daint as well as at the HLRS.
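As noted above, the common prerequisite for all of these workloads is that the data can be processed in parallel. A minimal single-machine sketch of that data-parallel pattern, using Python's multiprocessing as a stand-in for the MPI- or framework-based distribution a real cluster (or a suite like Urika-XC) would use:

```python
# Minimal single-machine sketch of the data-parallel pattern HPC/HPDA
# workloads rely on: split the data, process chunks independently, combine
# the partial results. On a real cluster the same idea is expressed with
# MPI, Spark or similar frameworks rather than multiprocessing.

from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    """Work done independently on one chunk of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)   # scatter + compute

    print("total:", sum(partials))                            # gather/reduce
```

An application only benefits from an HPC or HPDA system to the extent that its work can be decomposed this way; inherently serial codes see little gain from more nodes.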
