
Guangdong Giant Fluorine Energy Saving Technology Co.,Ltd

News

  • The Newest Fire Extinguishing Agent: Perfluorohexanone!
    Using water as a fire extinguishing agent in high-value asset areas, where electronic equipment is in operation and cannot be replaced, can be as devastating as the fire itself. Such spaces can instead be protected with a clean agent system that quickly extinguishes fires and protects sensitive equipment without harming people or the environment. The core of the system is the revolutionary perfluorohexanone fire protection fluid. Perfluorohexanone is stored in cylinders as a liquid and vaporizes as soon as it is discharged, completely flooding the protected space and absorbing heat better than water. The system detects a fire at the incipient, invisible stage and suppresses it before it can start to burn. Once the danger has passed, the perfluorohexanone evaporates rapidly without damaging any valuable assets. Perfluorohexanone extinguishing systems have become one of the most effective fire protection measures on the market. They are particularly suitable for areas that require a non-conductive extinguishing medium, where electronic systems cannot be shut down in an emergency, and where the residue left by other agents would be a problem. In NFPA 2001 and ISO 14520, perfluorohexanone fire protection fluid is designated FK-5-1-12. It is a fluoroketone, dodecafluoro-2-methylpentan-3-one, a compound of carbon, fluorine and oxygen with the chemical structure CF3CF2C(O)CF(CF3)2, summarized below. It is a transparent, colorless, low-odor liquid that is superpressurized with nitrogen and stored in high-pressure cylinders.
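    For reference, the following LaTeX snippet simply typesets the condensed structure already cited in the text and the molecular formula it implies; it adds no data beyond what the article states.

```latex
% FK-5-1-12 (perfluorohexanone, dodecafluoro-2-methylpentan-3-one):
% condensed structure as cited above, and the resulting molecular formula.
\[
\underbrace{\mathrm{CF_3 CF_2}}_{\text{perfluoroethyl}}
\,\mathrm{C(O)}\,
\underbrace{\mathrm{CF(CF_3)_2}}_{\text{perfluoroisopropyl}}
\;\;\Longrightarrow\;\;
\mathrm{C_6 F_{12} O}
\]
```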

    2020 06/02

  • China's Data Centers Consume More Electricity than the Three Gorges Hydropower Station Generates! Views from Guan Hongming, President of Dawning Cloud Computing Group
    The state's push for "new infrastructure" construction is benefiting the development of big data centers. How can the construction of big data centers be accelerated, what challenges does it currently face, and how can misunderstandings in the industry's development be avoided? On these issues, Guan Hongming, President of Dawning (Shuguang) Cloud Computing Group, recently gave an interview to a reporter from China Electronics News. The big data industry enters a period of rapid growth in scale. "At present, the development of China's big data industry has entered a period of rapid growth in scale. With increasing attention on 'new infrastructure', the big data industry will also usher in new development opportunities." Referring to the current situation of big data centers, Guan Hongming said that with the in-depth development of the mobile Internet, the Internet of Things and the cloud computing industry, and the accelerated implementation of the national big data strategy, every industry is mining the value of big data and studying its deeper applications; the volume of the big data industry is growing explosively, its application fields keep expanding, and China's big data industrial ecosystem is becoming more and more complete. In this context, big data centers, as infrastructure, have kept growing. Accelerating the construction and use of new infrastructure such as big data centers will have a significant impact on the information industry, manufacturing, energy and public utilities, financial services, transportation and other sectors. Typical applications such as the Internet of Things, the Internet of Vehicles, industrial production and remote services are expected to see accelerated growth. Big data application scenarios in the Internet, finance, communications, cities, healthcare, agriculture and other industries are also deepening, and a closed-loop upstream and downstream industrial chain based on big data mining, transmission, computation and use is taking shape. In the future, with the continued expansion of the digital economy and the concentration of industry resources, big data centers will play an increasingly important role in China's economy, and their pull on the upstream and downstream of the industrial chain will become more and more obvious. Big data center construction faces an energy consumption challenge. Guan Hongming said that the construction of big data centers has broad market demand, but it faces an energy consumption challenge. More than half of a traditional data center's energy consumption goes to cooling. In 2017, the total power consumption of China's data centers was 120-130 billion kWh, exceeding the combined 2017 output of the Three Gorges Dam and the Gezhouba power plant. According to IDC forecasts, the power consumption of China's data centers will rise to 296.2 billion kWh by 2020 and 384.22 billion kWh by 2025 (a rough illustration of these figures follows below). With the rapid development of the IT industry represented by 5G, big data and edge computing, the power density of a single data center server cabinet keeps rising, and traditional air cooling will increasingly hit a heat dissipation bottleneck, so reducing cooling energy consumption is the key to developing green data centers. Building sound green data centers has become a trend, and liquid cooling technology has become an ideal choice.
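    The following is a back-of-the-envelope sketch in Python using only the figures quoted above; the 30% reduction from liquid cooling is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope estimate based on the figures cited in the article:
# total data center consumption of ~120-130 billion kWh in 2017, with more
# than half of a traditional facility's energy going to cooling.

total_2017_kwh = 125e9          # midpoint of the 120-130 billion kWh range
cooling_share = 0.5             # "more than half" -> use 50% as a floor

cooling_kwh = total_2017_kwh * cooling_share
print(f"Estimated cooling energy in 2017: {cooling_kwh / 1e9:.0f} billion kWh")

# If liquid cooling cut cooling energy by, say, 30% (illustrative assumption),
# the annual saving would be:
assumed_reduction = 0.30
saving_kwh = cooling_kwh * assumed_reduction
print(f"Hypothetical saving at a 30% reduction: {saving_kwh / 1e9:.1f} billion kWh")
```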
    Sugon (Zhongke Shuguang) will continue to strengthen scientific and technological innovation, vigorously develop the green computing industry, and promote new infrastructure such as liquid-cooled, green, energy-saving data centers. In addition, the scalability and information security of big data centers also face a series of challenges: complex systems lead to low operation and maintenance efficiency, and various storage, computing and information resources are difficult to share; as data volumes and business loads grow, scalability must be ensured to meet business needs; and as the core of enterprise IT systems, big data centers face ever greater information security concerns. As for the next steps in promoting big data center construction, Guan Hongming said that, first, he hopes the government will guide and standardize the construction of data centers in key industries and large enterprises, promote the rational layout of large data centers, and build dedicated regional data centers around the development concepts of innovation, coordination, greenness, openness and sharing. Second, he hopes the government will be encouraged to cooperate with enterprises and social institutions, relying on professional enterprises to build big data centers on the premise of ensuring security, through government procurement, service outsourcing, social crowdsourcing and other means. Third, he suggests properly guiding the establishment of big data industry development funds and strategic emerging industry investment funds, and supporting green data center pilots, cloud service platforms and other big data application infrastructure invested in and built by enterprises.

    2020 03/20

  • How to Ensure Data Center Uptime and Electrical Safety in the Event of a Disaster?
    When disaster strikes, many critical facilities can face catastrophic consequences, but data centers are particularly vulnerable. Whether it is an enterprise on-premises data center, a colocation data center, or an edge data center, much of the business-critical information it holds cannot simply be moved to other facilities, so a power outage carries significant business consequences. The electrical equipment that powers a data center is also unique: while the consequences of a disaster-related outage are serious, so are the potential risks from electrical safety issues. The following looks at the impact of disasters on data centers from two perspectives: power outages and electrical safety. Disaster-related downtime costs. With several recent major disasters in the United States (from hurricanes in the Gulf of Mexico to forest fires in California), now is a good time for data center operators to understand how such disasters can affect their operations when they cause disruptions. The Uptime Institute's 2018 survey provides insights into power management trends and current challenges, with a focus on the data center. The report found a worrying rise in outages: the number of infrastructure outages and "severe service quality declines" increased by 6% over the previous year, and 31% of respondents said they had experienced an outage in their own data centers. Power outages can cause huge revenue losses. A recent Information Technology Intelligence Consulting (ITIC) study found that 81% of companies across 47 vertical markets estimate their average hourly downtime cost (excluding catastrophic downtime) at more than $300,000, and more than 33% of companies said they would lose $1 million or more per hour of downtime. Although each industry faces its own challenges, data centers are special in that the expectation of 100% uptime is directly tied to access to business-critical data, and any loss of access can have consequences that reach beyond the business itself. The threat of major outages highlights the need for backup power solutions to protect operations and minimize the impact of downtime. Key components of a backup power system. To prevent these high costs and keep systems up and running, data centers need an integrated power system for power management and disaster prevention. It starts with one or more uninterruptible power supplies (UPS), usually deployed together with backup generators and power distribution units, to provide reliable power during outages and keep critical IT assets running. These systems help businesses avoid data loss and hardware damage by keeping networks and other applications available during power events. As the trend toward hybrid cloud environments continues, monitoring software has become an important part of power management systems. In addition, some enterprises have implemented virtualized infrastructure that can be used together with power monitoring software to simplify and strengthen power management during a disaster or other events. By combining power management solutions with common virtualization management platforms, such as those from VMware, Cisco, NetApp, Dell EMC, HPE, Nutanix, and Scale Computing, businesses and their IT teams can extend availability.
    This allows teams to remotely manage physical and virtual servers and power management devices from a single console. Ultimately, data center operators need to know what power management technologies are in their infrastructure and whether these solutions can meet their reliability needs in a disaster. Adopting the right power system can mean the difference between business continuity and thousands of dollars in lost revenue (a simple cost illustration follows below). Safety is imperative. When data centers prepare for disasters, electrical safety may be overlooked. There are several reasons for this: companies often rely on professionals, or even the electrical equipment manufacturers themselves, to ensure the safety of the infrastructure they install. The reality, though, is that every organization has its own role to play, especially the data center operator. Data center electrical systems are usually designed for functionality, aesthetics, ease of maintenance, efficiency, and safety, but with so many competing priorities (not to mention the many other responsibilities operators face), safety does not always get the attention it needs. The first and most important step is to take the time to understand the unique environment and challenges a given site may face. This may include reviewing current distribution assets, critical load analyses, generator connectivity, availability, and fuel sources to determine where risks lie and how to address them in a disaster. In addition, an up-to-date one-line diagram of the facility's power distribution system is essential. To keep safety a top priority, it can help to identify equipment that could become unsafe during a disaster and use those opportunities to modernize or update it. Data centers can then implement emergency continuity plans within their facilities that identify qualified personnel, enabling employees to quickly and safely reduce harm by isolating hazardous equipment or placing it in a secure location with access restricted to authorized staff. The business team must ensure that the continuity plan is communicated to the appropriate data center staff and service personnel, and that disaster drills are performed so employees can respond effectively. As with the backup power plan, electrical safety requires a holistic view of facility operations: the structure, piping, HVAC, and other aspects of facility design play a vital role in safety and can become hazards if they are not taken into account in overall disaster planning. In conclusion, disasters can happen at any time and can have many adverse effects on business operations. Data center operators need a comprehensive disaster preparedness strategy that covers both the technology used to prevent outages and the procedures, protocols, and personnel responsible for electrical safety. With the right methods and plans in place, operators can minimize the impact of disasters on personnel safety and on the overall health of the business.
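    Returning to the survey figures above, here is a minimal sketch of a downtime cost estimate assuming the ITIC averages cited in the article; the outage duration is purely hypothetical.

```python
# Illustrative downtime-cost estimate using the ITIC figures cited above:
# average hourly downtime cost > $300,000, and >= $1 million per hour for
# more than a third of companies. The outage duration below is hypothetical.

AVERAGE_COST_PER_HOUR = 300_000      # lower bound for the "81% of companies" group
HIGH_END_COST_PER_HOUR = 1_000_000   # lower bound for the ">33% of companies" group

outage_hours = 4  # hypothetical disaster-related outage

print(f"Average-case loss for a {outage_hours}h outage: "
      f"${AVERAGE_COST_PER_HOUR * outage_hours:,}")
print(f"High-end loss for a {outage_hours}h outage:    "
      f"${HIGH_END_COST_PER_HOUR * outage_hours:,}")
```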

    2020 01/14

  • "Qualitative Big Data" Research Improves Public Policy Making
    In December 2019, the American Enterprise Institute released a report, "American Family Diaries: Can Ethnographic Research Assist Public Policy?". The report presents recent research that uses qualitative methods such as ethnographic observation and in-depth interviews to investigate the causes of poverty and barriers to social mobility, and explores how qualitative research can be used to improve public policy making. Our reporter interviewed relevant scholars about the characteristics of qualitative and quantitative research methods and their application in social science and policy research. Two revolutions in policy research. Aparna Mathur, a researcher in economic policy at the American Enterprise Institute, and Jennifer M. Silva, an assistant professor of sociology and anthropology at Bucknell University, said that in the United States, policy research on poverty and social mobility mainly uses quantitative methods that rely heavily on large-scale representative surveys and administrative data. Quantitative research gathers a large and diverse set of information about people's beliefs and behaviors, and aims to generalize numerical relationships between variables in order to predict future behavior. According to Robert A. Moffitt, a professor of economics at Johns Hopkins University, there have been two revolutions in data collection and analysis in social science research, especially policy-oriented research, over the past 50 years. The first was the development of large-scale, population-representative household surveys, such as the Current Population Survey and the National Health Interview Survey, which collect hundreds of key socio-economic variables and allow statistics and correlations to be estimated at the level of the whole population. Combined with household panel surveys (repeated visits to the same sample families at different points in time), these surveys completely changed researchers' understanding of the employment, income, family structure, and migration dynamics of low-income Americans. The second revolution was the development of large-scale computing and high-capacity storage, which enabled high-speed download, storage, and analysis of data. The combination of individual-level household surveys and rapid analysis methods produced an unexpectedly massive accumulation of knowledge. Related to this is the rise of social experiments and randomized controlled trials. The idea of social experimentation originated in the research on the negative income tax by Guy E. Orcutt and other econometricians in the 1960s and 1970s. Randomized controlled trials require large-scale data collection, large samples, and appropriate methods of calculation; they are important tools for examining the impact of social policies on low-income people and are generally considered to provide the best causal evaluation of a policy. Quantitative and qualitative methods each have advantages and disadvantages. Hu Shouping, founding director of Florida State University's higher education success center and chair professor of higher education, told our reporter that quantitative, qualitative, and mixed quantitative-qualitative methods are the basic methods of social science research. Quantitative methods are regarded by the general public and many decision makers as more scientific and objective, while qualitative methods have to spend time and effort justifying their scientific value.
    Generally speaking, quantitative methods are better suited to more mature research fields and are more direct and powerful for testing research hypotheses, while qualitative methods are more valuable in fields that are less well understood and are conducive to generating hypotheses. Quantitative methods help verify causality and produce general, universal conclusions; qualitative methods help in understanding unique individuals and unique phenomena. Both are widely used across the humanities and social sciences, but their use varies with the academic tradition of the discipline and the specific research question, and is also affected by research purpose, time, resources, and funding. Generally, small research projects tend to use either qualitative or quantitative methods, while large, well-funded projects tend to use mixed methods. Hu Shouping said that the quality of policy making depends on the rigor, scientific soundness, comprehensiveness, and fit of the evidence used. Qualitative methods emphasize the perspectives, feelings, and experiences of the people involved and pay more attention to vulnerable groups, which benefits the rationality of policy making; but because those people are limited by their own interests, horizons, and goals, qualitative research sometimes needs to be interpreted and used critically. Quantitative findings about vulnerable groups may be obscured by "trends" in the data, and qualitative research can make up for this deficiency. In policy evaluation, quantitative methods can help establish or refute causality, and qualitative methods can help explain why a causal relationship exists. In short, every research method has its strengths and weaknesses, and the choice should weigh the specific research question, goals, resources, time, and environment. Big data gives rise to "qualitative big data". The rise and rapid spread of big data has profoundly affected the research process. Hu Shouping said that with the rapid expansion of data collection channels and capabilities and the growth of computing power, big data has become a hot area for quantitative research, which will inevitably affect qualitative research. The public may become more convinced of the scientific value and practicality of big data and quantitative methods and thus overlook the rationality and contribution of qualitative methods, and decision makers may likewise lean on big data and quantitative findings as the basis for decisions while ignoring qualitative results. Big data has created conditions for the broader application of quantitative methods, and the two appear well "matched", but the purpose, advantages, and applicable domains of qualitative methods will not become obsolete or disappear. On the contrary, qualitative research can reduce the negative consequences that big-data-based quantitative research may have for vulnerable groups. Of course, given limited resources, the popularity of big data may make it harder for traditional qualitative research projects to obtain funding.
    Lynn Jamieson, a professor of sociology at the University of Edinburgh, and Sarah Lewthwaite, a researcher at the UK Economic and Social Research Council's National Centre for Research Methods, suggested that one shortcoming of qualitative research is that it is difficult for researchers, or even teams, to process large amounts of qualitative data quickly. Traditional, rigorous analysis requires researchers to be "immersed" in the data to a certain extent, and even with software it still takes a long time. This problem sparked social scientists' interest in big data and a preference for using quantitative methods on quantitative datasets, which in turn gave birth to "qualitative big data", combining the breadth of quantitative research with the depth of qualitative research. According to Hu Shouping, "qualitative big data" refers to a qualitative database collected and constructed by applying qualitative methods at large scale and from multiple angles; it is a new concept that coexists with quantitative big data. Its advantage is that it allows researchers to observe and understand, at large scale and from many angles, the interactions among members of a system and between individuals, creating conditions for a more comprehensive understanding of the influence of individuals, organizations, systems, and policies. According to Mathur and Silva, the fundamental goals of quantitative research include general applicability, efficiency, reproducibility, and transparency, which "coincide with" emerging big data: the purpose of big data analysis is to "let the numbers speak", showing and predicting patterns and trends in human behavior in an unbiased manner. Even as quantitative datasets keep expanding to cover more areas of human experience and analysis techniques become faster and more sophisticated, social science researchers and public policy experts are increasingly interested in qualitative research. Qualitative research is based on a "logic of discovery" and attempts to see the world from the perspective of the people being studied; its purpose is not to prove or falsify a theory about the drivers of behavior, but to propose new theory starting from the systems of meaning that people create and share in daily life. Mathur and Silva emphasized that researchers "immerse" themselves in the lives of their subjects in order to uncover the "world of meaning" that drives their behavior and guides their decisions, and to unearth the mechanisms behind the demographic patterns revealed by quantitative data. In the era of big data, researchers should seek to use all research methods in an optimal way to fill gaps in existing knowledge and provide better policy references, rather than sticking to a single method.

    2020 01/09

  • Singapore Research Team Studies Local Chinese Community Networks Through Big Data
    According to Singapore's Lianhe Zaobao, a research team led by Ding Hesheng, head of the Chinese Department of the National University of Singapore, and senior researcher Xu Yuantai has in recent years visited local clan association halls, temples, and cemeteries to collect and code material for a large project. The goal is to build a large database of individuals and their personal relationships spanning eight generations of Singaporean community figures from 1819 to 2019, using big data to map the social networks of Singapore's Chinese community. The research focuses not only on Chinese community leaders but also on the "little people" of history, using big data to connect community leaders with figures at the grassroots of society. In 2017, the team began using a Geographic Information System (GIS) to capture Singapore's history. In February 2019, the "Singapore Biographical Database", jointly created by the National Library of Singapore, the Singapore Federation of Chinese Clan Associations, and the Chinese Department of the National University of Singapore, was officially launched; charting Chinese community leaders and their extended social networks has brought the research to a new stage. Xu Yuantai pointed out that the database started with "big men" and has compiled information on about 1,000 Chinese community leaders. The plan is to build a database of individuals and relationships across eight generations of Singaporean figures from 1819 to 2019, divided into 25-year stages. The team is currently collating data on about 50,000 people recorded in the "Singapore Chinese Inscription Compilation 1819-1911". Xu Yuantai said these 50,000 people form a huge network, collected from halls, temples, and inscriptions; most of them are unknown "little people", important elements that the academic community has often ignored. Collecting new material from commemorative publications and cemetery tombstones. The team is also developing new first-hand materials, including commemorative publications collected from the various clan halls and death records copied from tombstones. After identification and digitization, these records, spanning 1922 to 1972, contain more than 62,000 names. "So far the academic community has never brought this batch of data into its research," Xu Yuantai said; all the tombstone records from Bukit Brown Cemetery were preserved and provided by the National Heritage Board. Ding Hesheng and Xu Yuantai also took students to Bukit Brown Cemetery in person to look for earlier tombstones, and more than 1,500 Qing-era names have been found; the oldest tombstone discovered was carved in 1824. Xu Yuantai revealed that over the past two years the team has set up a separate database for these tombstones, which will later be integrated with the "Singapore Biographical Database" so that the tombstones' geographic information and personal information can be combined, opening up new research perspectives and directions. The team has also cooperated with the Singapore Genealogical Society and collected more than 100 genealogies, mainly from Fujian and Guangdong, including families from Chaozhou. Xu Yuantai said that large numbers of migrants moved to places such as Singapore and Taiwan during the Ming and Qing dynasties.
    Through the study of genealogies, the team can see the connections and differences between migrants who settled in the two places. Xu Yuantai also pointed out that biographical research in the United States and Taiwan has mainly focused on "officials", that is, intellectuals. "In early Singapore and Malaysia, however, most people were businessmen and laborers. They often left their names by building temples and halls. The starting point of our research is therefore very different. Of course there are many problems and difficulties, and building such a database is not easy." Zhang Wenbo (23), a master's student in the Chinese Department of the National University of Singapore, and Deng Kaien (21), a second-year student in the Chinese department of Nanyang Technological University, are part of the team and took part in the research on temples and tombstones. Deng Kaien said the fieldwork process is very rigorous: the information on each tombstone must be carefully classified, identified, and proofread, and the compiled data must be further coded before it can be entered into the database for analysis. Xu Yuantai believes this research can help people understand the footprint of the Chinese community and how its members interacted locally. "Singapore's halls and temples keep relocating, many records are being lost, and many people share the same names. Coding helps us connect a network and see what the human brain and the naked eye cannot see, and ask new questions."
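    The kind of person-relationship coding described above can be illustrated with a minimal sketch using only the Python standard library; the names, relationship types and sources below are hypothetical placeholders, not records from the project's actual database.

```python
# Minimal sketch of a person-relationship network of the kind described above.
# Names, relations and sources are hypothetical placeholders; the real database
# links tens of thousands of individuals from inscriptions and tombstone records.
from collections import defaultdict

# Each edge: (person_a, person_b, relationship, source)
edges = [
    ("Person A", "Person B", "fellow clan member", "clan hall commemorative publication"),
    ("Person B", "Person C", "temple donor list",  "temple inscription, 1890"),
    ("Person A", "Person C", "business partner",   "tombstone record"),
]

# Build an adjacency list so connections invisible to the naked eye can be queried.
network = defaultdict(list)
for a, b, relation, source in edges:
    network[a].append((b, relation, source))
    network[b].append((a, relation, source))

for person, links in network.items():
    print(person)
    for other, relation, source in links:
        print(f"  -- {relation} --> {other}  [{source}]")
```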

    2020 01/07

  • The Application of Hyperscale Data Centers Will Drive the Global Cloud Computing Revolution
    Data is now the foundation of the new economy. Ever-growing data volumes have driven the development of hyperscale data centers, very large mission-critical facilities that run the servers processing this data. More and more workloads are being consolidated into larger, more efficient data center facilities. It has become commonplace for hyperscale companies such as AWS, Google, Facebook, Microsoft, and Apple to invest $1 billion to $3 billion in data center campuses. Hyperscale providers are seeking to expand capacity to serve growing user bases in Europe, Asia, South America and even Africa. This trend is part of a broader consolidation of IT infrastructure in the United States, and it will include many new data centers built in unexpected places. The rise of hyperscale computing reflects how businesses now relate to data and IT operations. Many companies do not want to spend millions of dollars to build and operate data centers; for them, it is not a core competency. As a result, data is flowing out of enterprises' small and medium-sized data centers and IT equipment rooms and into the largest and most efficient data center facilities in the world, facilities designed to easily add more servers and power capacity as workloads grow. The rise of hyperscale computing has created a new paradigm for the data center business and changed the landscape of suppliers and customers. Hyperscale companies have become the largest customers leasing wholesale and build-to-suit data center space, and as a result they have enormous influence over data center development, with data centers quickly adapting to greater demand. This digital transformation will create a distributed infrastructure stretching from data centers to the network edge. Four main drivers directly impact the growth of hyperscale data centers:
    • Cloud computing
    • Social media
    • Software platforms
    • Content delivery
    Understanding the landscape of hyperscale data centers. To understand the hyperscale data center market and its participants, it helps to see the business as comprising two tiers of companies:
    • Tier 1: cloud computing service providers and social media companies, that is, hyperscale operators such as Google, Microsoft, Amazon, Facebook, Apple and other industry giants. Their data centers range from 10 MW to 70 MW of power capacity and include multiple data halls in a single building or across multiple buildings, from purpose-built projects to multi-storey data center campuses.
    • Tier 2: SaaS vendors, platform companies and hosting providers. This tier includes companies such as Oracle, Baidu and China Telecom, as well as SaaS vendors (such as Salesforce, SAP, Workday, PayPal, and Dropbox) and platform vendors (such as Uber and Lyft). These customers typically build or lease data centers with a power capacity of 2 to 5 MW at a time.
    Regardless of tier, each business must decide whether to build or buy, and many factors affect the decision. Tier 1 companies can build their own data centers, lease space from wholesale providers, or work with developers on custom solutions. These decisions are usually driven by cost and time: how quickly can they get enough capacity, and at what cost?
    Tier 2 companies with smaller power capacity requirements prefer to lease wholesale data center space and work with data center developers to plan long-term growth. For these companies, good partnerships are key. How is a hyperscale data center different? By number, hyperscale data centers currently make up less than 10% of global data centers, but they dominate new investment in infrastructure and servers. The influence of hyperscale customers and the geographic spread of hyperscale data centers make them unique in the industry. The latest hyperscale data center report published by industry media quotes statistics from Synergy Research showing that the capital expenditure of 20 hyperscale providers worldwide surged 43% in 2018 to nearly $120 billion. So what makes hyperscale data centers special? Before 2016, enterprises rarely leased more than 10 MW of wholesale data center capacity at a time. In 2018, a market report by Jim Kerrigan of North American Data Centers counted 11 lease transactions above 10 MW, including a 72 MW lease in Northern Virginia. Transactions of this size require a different approach. For years, turnkey data halls built by data center providers offered slightly more than 1 MW of IT capacity in roughly 10,000 to 12,000 square feet of space. Today, colocation providers are delivering data halls of 30,000 to 60,000 square feet. The hyperscale trend in data halls has prompted some companies to optimize their construction processes and supply chains to win these very large lease transactions. The design and construction of hyperscale facilities differs from traditional enterprise data center space in many ways, including:
    • Real estate and site selection: Hyperscale operators are growing faster than other companies. The data center campuses being built by data center REITs illustrate this trend, typically offering 100 MW to 150 MW of power capacity.
    • Power: Power sustainability is a key operational measure for most large hyperscale operators, and more data center providers are forming development teams that specialize in the complexities of the energy market.
    • Power infrastructure: Hyperscale operators are exploring innovative approaches such as centralized UPS systems that allow data centers to operate at lower power usage effectiveness (PUE).
    • Software-centric resilience: Cloud computing is changing how businesses achieve uptime, bringing new architectures that use software and network connections (including cloud availability zones) to create resilience.
    • Cooling methods: Cooling has always been a focus of hyperscale optimization; some operators have adopted indirect air cooling, membrane evaporative cooling (Facebook), direct-to-chip water cooling (Google), or rear-door cooling units (LinkedIn).
    • Data halls: Wholesale providers have moved to larger data halls of 35,000 to 85,000 square feet that can support up to 9 MW of power capacity (a rough density comparison follows below).
    As for where hyperscale data centers are built, the major cloud computing campuses are usually located in remote areas.
    For example, a data center construction boom has taken off in rural areas of Oregon, Iowa, and North Carolina. Industry experts say that, simply put, we are at the front edge of a tremendous change as the supply of computing resources shifts from expensive and complex to simple and cheap. At the same time, with the growth of cloud computing and new workloads, large data centers are being developed on the outskirts of major U.S. cities, including technology-centric population centers such as Phoenix, Dallas, Chicago, Northern Virginia, and similar markets. Service providers and hyperscale computing. The explosive growth of hyperscale computing has affected not only data center users; service providers have had to change how they segment the "service provider world." Wholesale colocation vendors competing for hyperscale business may struggle to set up the procurement and construction operations needed to meet new expectations for cost, speed, and scale. For data center providers and developers, these are just some of the ways hyperscale customer demand and related trends are changing the game:
    • Transaction size
    • Vertical construction
    • Land banking
    • Fast time to market
    • Transaction structure
    Next steps for hyperscale data centers. According to the latest research on the hyperscale market, the next 10 to 15 years will be an era of continuous development defined by two broad themes:
    • Cutting-edge innovation, with technology companies and service providers racing to deploy and commercialize new technologies such as artificial intelligence, the Internet of Things, augmented reality, 5G, and autonomous vehicles.
    • Back-end industrialization, as new investment streamlines the global supply chain and brings new levels of speed and efficiency to the delivery of hyperscale data centers.
    In the next few years, the edge computing market will continue to grow slowly, then accelerate. M&A activity between cloud computing giants and data center providers will pick up. As higher power densities demand alternatives, liquid cooling is likely to be used more and more. What is certain is that in the coming years, partnership and operational experience will matter more than ever: hyperscale operators are looking for reliability in delivery and consistency in design and performance. For the hyperscale data center market, the goal is partners, not vendors.
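    The shift in data hall scale cited above can be made concrete with a quick calculation; this sketch uses only the hall sizes and capacities quoted in the article, and the chosen midpoints are illustrative assumptions.

```python
# Rough power-density comparison based on the figures quoted above:
# a traditional turnkey data hall (~1 MW over 10,000-12,000 sq ft) versus a
# hyperscale wholesale hall (up to 9 MW over 35,000-85,000 sq ft).

def watts_per_sqft(power_mw: float, area_sqft: float) -> float:
    """Convert hall-level capacity to average watts per square foot."""
    return power_mw * 1e6 / area_sqft

traditional = watts_per_sqft(1.0, 11_000)    # midpoint of 10,000-12,000 sq ft
hyperscale = watts_per_sqft(9.0, 60_000)     # 9 MW over a 60,000 sq ft hall

print(f"Traditional turnkey hall:  ~{traditional:.0f} W/sq ft")
print(f"Hyperscale wholesale hall: ~{hyperscale:.0f} W/sq ft")
```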

    2020 01/06

  • Where Is the "Sticking Point" for Big Data Applications?
    I have always felt that big data applications are not living up to expectations, especially for users in traditional industries. Compared with Internet-native companies, companies in traditional industries have been greatly impacted and squeezed, and one reason lies in data-driven applications. Internet companies can improve and adjust their services for registered users based on those users' access behavior, but users in traditional industries do not have the Internet capabilities of cloud-native applications. In that case, how should traditional industries catch up? It is said that traditional industries hold large amounts of data, and indeed they do. Banks have depositor data, civil aviation has passenger flight records, and telecom operators have mobile phone user data. These data are the wealth of companies in those industries, but they are also users' private information, and when it comes to big data we have not seen much business model innovation built on them. Comparing traditional enterprises with Internet companies, the gap lies in the Internet: one is undergoing an "Internet+" transformation, the other is Internet-native, and the difference is obvious. Speaking of the Internet+ transformation of traditional enterprises, Analysys CPO Zhu Jiang divided it into four stages in an interview: in the first stage, marketing moves to the Internet, much like advertising online; in the second, the Internet becomes a sales channel for products; in the third, the products themselves are on the Internet, with the company's own products and services able to support users in digital form; in the fourth, operations themselves run on the Internet. Today, most industry users are still in transition from the second to the third stage, and only once the third stage is reached can big data business innovation really get on track. For traditional industries, Analysys released the ARGO growth model for intelligent user operations and the Analysys Ark product suite. The ARGO model maps the path from user Acquisition and Retention through Growth to value creation, and organizes intelligent operations on the basis of big data and open technology (OpenTech) applications. Only when a company truly has its own Internet-based product platform can it build a bridge between the enterprise and consumers. This is different from traditional companies opening online stores on e-commerce platforms such as Taobao, Tmall, and JD.com (Jingdong): in that case, the users' access data belongs to the e-commerce platform rather than to the industry company, so business innovation based on big data is out of the question. Only with its own Internet-based product platform can big data business innovation open its doors, and only then can the ARGO model and the Analysys Ark be put to use. Simply put, ARGO is a methodology covering the whole process of intelligent user operations: from reaching users (guided registration, event campaigns, SMS and email verification, third-party quick login, in-app notifications, new-user coupons, account information and preferences, beginner task reminders, core feature reminders and so on), to user behavior analysis and demographic and device clustering, to evaluating new-channel acquisition, conversion quality and abnormal traffic, and on to product onboarding and guiding user conversion.
    Compared with ARGO, Analysys Ark provides the software tools required for big data operations, including suites for intelligent analysis, intelligent operations, user profiling and more, covering everything from event-tracking (instrumentation) design and management and fine-grained user segmentation to multi-channel reach, on-premises deployment, flexible design of operation plans, and real-time dashboards (a minimal illustration of such an instrumentation event follows below). These tool platforms are built entirely on open platforms and can be integrated seamlessly with open source platforms. In a word, for enterprise users in traditional industries, big data business innovation has everything in place except the east wind. That east wind is for users to build their own Internet-based product platforms as soon as possible; otherwise, without a stage of their own, even the best play cannot be performed. This is where traditional industry users' big data applications get stuck.
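    As an illustration of the event-tracking ("buried point") instrumentation mentioned above, here is a minimal sketch of what one tracked user event might look like; the field names are hypothetical placeholders and are not the actual Analysys Ark schema.

```python
# Minimal sketch of an event-tracking ("buried point") record of the kind an
# ARGO-style user-operations platform would collect. Field names are
# hypothetical placeholders, not the actual Analysys Ark schema.
import json
import time
import uuid

def track_event(user_id: str, event: str, properties: dict) -> str:
    """Assemble one instrumentation event as a JSON payload."""
    payload = {
        "event_id": str(uuid.uuid4()),
        "user_id": user_id,
        "event": event,              # e.g. "guided_registration", "coupon_claimed"
        "timestamp": int(time.time()),
        "properties": properties,    # channel, device info, campaign, etc.
    }
    return json.dumps(payload, ensure_ascii=False)

print(track_event("u_10086", "guided_registration",
                  {"channel": "sms_verification", "device": "Android"}))
```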

    2020 01/04

  • Chongqing Sets up Big Data Industry Talent Alliance
    The Chongqing Big Data Industry Talent Alliance was officially established in Yongchuan District, Chongqing. Government departments, universities and related enterprises will jointly build an integrated "government, industry, research, and application" system for the big data industry, helping to ease Chongqing's big data talent bottleneck. In recent years, Chongqing has continued to focus on "big data intelligence" and is committed to building a trillion-yuan intelligent industry; it has now gathered more than 3,000 related enterprises, and in 2018 the output value of its intelligent industry reached 464 billion yuan. At the same time, the talent shortcomings of Chongqing's big data and intelligent industries have gradually become apparent, with a talent gap in the tens of thousands, making it imperative to strengthen collaborative training and precise matching between enterprises and universities. It is reported that the alliance brings together more than 50 universities, including Chongqing University, Southwest University, and the University of Electronic Science and Technology, as well as well-known Internet companies such as Baidu, Alibaba, Tencent, iFLYTEK, and Inspur. It aims to integrate resources of all kinds from universities, industrial parks, and enterprises to form a full-chain industrial ecology covering big data talent training, technology research and development, achievement transformation, and promotion and application, providing talent for the development of the big data industry in Chongqing and across the country. Specifically, the alliance will precisely and quantitatively align universities' program offerings, discipline construction, technology R&D, curriculum design, talent training, and internships, and develop unified standards for training big data talent; it will also build a public service platform and a big data talent database to match supply and demand precisely and provide a talent guarantee for the development of the big data industry.

    2020 01/02

  • Anhui Province's First Data Center Station Is Completed
    Power supply reliability has been strengthened, and browsing on a mobile phone gets faster. On December 30, the reporter learned from Hefei Power Supply Company that the first data center station in Anhui Province, the Zhonghai data center station in Hefei, was officially put into operation. The station combines urban power supply and data services in one facility and will play a positive role in Hefei's "Ubiquitous Power Internet of Things" pilot and smart city construction. The Zhonghai station is located near Jindou Road in Hefei's Binhu New District. The site was originally planned as a 20 kV switching station, responsible for guaranteeing power supply to tens of thousands of surrounding users, and is one of the important distribution network hubs in the area. Without changing its original layout and role, Hefei Power Supply Company used the spare space in the switching station to upgrade the infrastructure, installed new data servers and network communication equipment, and turned it into Anhui's first data center station built inside a switching station. The station will provide cabinets, servers, and cloud platforms for telecom operators and Internet companies, further improving data aggregation and edge computing capabilities and supporting the development of the urban Internet and big data. "People nearby who browse Douyin and watch videos at home will get a further speed-up and a smoother experience," said the head of the Internet office at Hefei Power Supply Company. According to reports, there are currently nearly a thousand 10 kV switching stations in Hefei's main urban area. Taking advantage of their large number, wide distribution, and proximity to users, Hefei Power Supply Company plans to select another 10 switching stations in the Binhu demonstration area in 2020 and upgrade them into "multi-station integration" projects combining data centers, 5G base stations, and even photovoltaics, charging, and energy storage, building low-cost, high-efficiency, multi-category, new-technology energy service complexes and providing high-quality basic resources for Hefei's smart city construction. Since May this year, when the 23-square-kilometer core area of Hefei Binhu was selected as one of the province's two "Ubiquitous Power Internet of Things" demonstration areas, Hefei Power Supply Company has accelerated construction of the pilot project. By the end of this year, the first 25 construction tasks will have been completed one after another, and proactive power outage detection and WeChat-based visual dispatching have achieved full coverage in the Binhu core area. At present, the average outage time for users in the core area has dropped from 96 minutes to 51 minutes. Through the "Internet+" smart energy platform built on this network, Hefei Power Supply Company currently provides integrated energy services to 52 companies, reducing their costs. The platform also provides government departments with big data analysis, such as on urban industrial electricity use, offering an important basis for decisions about the city's future development. From next year, Hefei Power Supply Company plans to expand the demonstration area to 100 square kilometers of the Binhu New Area and build an energy Internet ecosystem.

    2019 12/31

  • Five Essential Skills for the Data Science Job Market in 2020
    Data science is a highly competitive field, and practitioners are rapidly accumulating more and more skills and experience. This has led to skyrocketing demand for machine learning engineers, and data scientists increasingly need to become developers as well. To stay competitive, be prepared for new ways of working with new tools. Here are five skills the data science job market will demand in 2020.
    1. Agile development. Agile is a way of organizing work that is heavily used by development teams. More and more people moving into data science roles come from a pure software development background, which has given rise to the machine learning engineer role. (Image: Post-its and agile development seem to go hand in hand.) More and more data scientists / machine learning engineers work as developers: their job is to continually improve the machine-learning-related parts of an existing code base. For these roles, the data scientist must understand how to work in an agile, Scrum-based way. Scrum defines different roles for different people, and this role definition keeps work running smoothly and continuously improving.
    2. Git and GitHub. Git and GitHub are developer tools that help manage different versions of software. They track all changes made to a code base and make collaboration easy when multiple developers change the same project at the same time. (Image: GitHub is a good choice.) As the data scientist's role becomes more developer-like, proficiency with these tools becomes a necessary skill. Git is now an essential requirement in many job postings, and it takes time to become proficient. It is easy to start learning Git when you work alone or your colleagues are also beginners, but when you join a team of Git experts and you are the only novice, keeping up can take more effort than you expect. (Image: Git is a skill you must master.)
    3. Industrialization. The way we think about data science projects is also changing. Data scientists still use machine learning to answer business questions, but over time their projects are increasingly developed for production systems, for example as microservices inside larger software. (Image: AWS cloud services.) At the same time, the CPU and RAM consumption of advanced models keeps growing, especially with neural networks and deep learning. For a data scientist's work, not only the accuracy of the model but also the execution time and other industrial aspects of the project are becoming more and more important.
    4. Cloud and big data. As machine learning is industrialized, the constraints on data scientists grow tighter, and they become serious constraints on data engineers and the IT organization as a whole. (Image: a well-known cartoon; source: https://www.cyberciti.biz/humor/dad-what-are-clouds-made-of-in-it/) Where data scientists can work on reducing the time a model needs, IT staff can contribute by changing the computing services, which are typically obtained in one or both of the following ways. Cloud: moving computing resources to an external vendor such as AWS, Microsoft Azure, or Google Cloud makes it easy to build a machine learning environment that can be accessed quickly and remotely.
    This requires data scientists to have a basic understanding of cloud capabilities, such as using a remote server instead of their own computer, or using Linux instead of Windows or macOS. (Image: PySpark lets you write Python code for parallel, big data systems.) Big data: Hadoop and Spark are two tools that allow tasks to be processed in parallel across many computers (worker nodes) at the same time. This requires data scientists to implement models differently, because the code must allow parallel execution, as the sketch below illustrates.
    5. NLP, neural networks and deep learning. Many data scientists still consider NLP and image recognition to be specialist topics that not everyone has to master. (Image: you need to understand deep learning, machine learning loosely inspired by the human brain.) However, use cases for image classification and NLP are becoming more common, even in "ordinary" businesses, and without a basic understanding of these technologies there is no way to keep up with the current technological environment. Even if you have no direct application for such models in your work, hands-on practice projects are easy to find, and they teach you the basic steps of working with image and text data.
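    As a concrete example of the point above that code must allow parallel execution, here is a minimal PySpark sketch. It assumes pyspark is installed; the file path and column names are hypothetical and only stand in for whatever data a real project would use.

```python
# Minimal PySpark sketch: the same aggregation a data scientist might write in
# pandas, expressed so Spark can split the work across worker nodes.
# Assumes `pip install pyspark`; the file path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("parallel-aggregation-demo").getOrCreate()

# Spark reads and partitions the data, so each worker processes a chunk.
df = spark.read.csv("events.csv", header=True, inferSchema=True)

daily_counts = (
    df.groupBy("event_date")          # hypothetical column
      .agg(F.countDistinct("user_id").alias("active_users"))
      .orderBy("event_date")
)

daily_counts.show(10)   # triggers the distributed job and prints 10 rows
spark.stop()
```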

    2019 12/30

  • Three Directions of Data Center Technology Development in 2020
    Considering the prudence of data center technology development (such as the confidentiality of critical infrastructure, non-public agreements, etc.), it is impossible to make specific predictions without taking huge risks. But through dialogue and analysis with suppliers and analysts, one can understand some of the development directions of data center technology. The following will focus on the three development directions and trends of data center technology. Research institutions believe that these three major trends will be realized in 2020 and beyond, and are of great significance. First, machine learning and operational data collection provide new possibilities for intelligent data center management tools; second, refocusing on the power density of power and cooling technologies driven by machine learning, and reducing the need to deploy computing infrastructure at edge computers; Third, the enthusiasm for the development of data center technology may one day make diesel generators as a backup power source for data centers a thing of the past. 1. Data-driven data center management For years, large vendors have been discussing the issue of adding predictive analytics to data center management tools (ie, DCIM software). At the same time, smaller companies such as Nlyte and Vigilent are bringing predictive tools to market. Among them, Schneider Electric and Vertiv, two large suppliers, said in December last year that they were collecting sufficient operational data from customer equipment and have begun to launch viable forecasting functions. "We have a very large data pool with billions of rows of data, and we think this is very important and we can start to change the way we provide services and solutions and become more predictable," said Steve Lalla, Executive Vice President of Services at Vertiv And began researching service level agreements (SLAs). " Vendors continuously collect data from customer systems through their monitoring software (on-premise and increasing SaaS). Lalla said that over time, data has become more standardized and organized, making it useful for analytics. Schneider Electric's senior vice president of innovation and CTO Kevin Brown said that the company's commitment to building predictive data center management capabilities and delivering them as software as a service (SaaS) began three years ago. "Now we have enough data in the cloud to start rolling out predictive analytics, more sophisticated battery-aware models and machine learning algorithms are no longer theoretical. These products will be released this quarter," he said. He said Schneider is currently collecting data on 250,000 to 300,000 devices deployed in customer data centers. He said the company hired a dedicated team of data scientists, and when it had about 200,000 devices, the team began to feel confident about the accuracy of some of their algorithms. For example, have enough confidence to do things like predict when a UPS power battery might fail. Schneider wants to collect more data to do this. He explained, "The more powerful the algorithm, the more data it needs. The standard will continue to increase, depending on how sophisticated the user wants the algorithm." Andy Lawrence, research executive director of the data center industry authority Certification Institute Uptime Institute, said in a recent webinar that the advent of machine learning has driven the recovery of data center management software. 
The DCIM software market was once full of promise, but it did not show the rapid growth many expected. Despite the slow progress, it has been accepted by users. According to Rhonda Ascierto, vice president of research at Uptime Institute, DCIM can now be considered a mainstream technology: virtually all data centers have some kind of DCIM, whether it is called DCIM or something else. Most importantly, enough data center management software has been deployed, and enough data collected, to build machine learning-driven predictive analytics and automation capabilities. Growing data availability and machine learning technology are driving data center management software forward, but there is a third driver: edge computing. When users plan to deploy many small compute nodes near where data is generated, they quickly run into the problem of operating a distributed infrastructure economically. Tools like DCIM, especially when delivered as a cloud service (SaaS), are a natural fit, because remote monitoring and management can be handled through a centralized console. Steven Carlini, vice president of innovation and data centers at Schneider Electric, said, "Edge computing has become the core of Schneider Electric's infrastructure management SaaS strategy. The idea behind managing a data center with a cloud-based system is that in many cases the data needs to stay on site, and we have solved this problem. It is indeed more valuable when deployed at scale. The real value will be at the edge."

2. Edge computing gets smaller, faster, and ubiquitous

Edge computing is putting increasing pressure on the engineers designing data center technology, who need to make data centers smaller and denser. For example, Schneider Electric recently released its smallest micro data center to date: a 6U cabinet that can house servers, network equipment and UPS power, and can be wall-mounted. Brown said he expects this micro data center product to generate significant revenue for Schneider in 2020. Vertiv updated its power portfolio in 2019 and introduced a series of higher-power-density UPS units. Quirk said that, among all the company's products, the rack-mounted GXT5 series UPS was designed with edge computing needs fully in mind; its power range runs from 500VA to 10kVA (some models support 208V, while others support both 208V and 120V). Edge computing was also an important consideration behind the partnership that Schneider, immersion cooling company Iceotope, and electronics distributor and IT integrator Avnet announced this October. Iceotope's cooling approach does not immerse servers in tanks of liquid coolant or run chilled-water piping to the chips on the motherboard; instead it injects the coolant into a sealed server chassis. This means the solution can be deployed in standard data center racks, and standard servers can be liquid cooled. The first problem immersion cooling solves is high power density. The growth of machine learning is driving adoption of the GPU servers used to train deep learning models, and the power density of these power-hungry GPU chips far exceeds what a standard data center design can handle. Many users can still rely on air cooling, and liquid-cooled rear-door heat exchangers that cool the air directly at the rack are the most popular way to address the problem.
Proponents of immersion cooling, however, emphasize its efficiency advantages. These solutions need no fans and therefore save power. "Using liquid cooling in many environments can reduce energy consumption by at least 15%," Brown said. Immersion cooling also solves several problems for edge computing: removing components such as fans means fewer parts that can fail; providing higher power density in a smaller footprint makes it easier to deploy edge facilities where space is tight; and sealed enclosures address the issue of dust, which can damage IT equipment. Analyst Ascierto said that although vendors are excited about edge computing, an Uptime Institute survey shows there is still no significant demand for edge computing capacity. To date, most demand for micro data centers of 100 kW or less has come from server rooms or remote locations where computing power already exists. Ascierto does not expect demand for edge computing to surge in 2020; once more IoT devices and 5G wireless infrastructure are deployed, a large wave of demand is expected after 2020.

3. The promise of better backup power

Another major shift in data center design, replacing diesel generators with batteries or other technologies, is only just beginning and may not arrive in 2020. As Lawrence points out, diesel generators remain a problem for data centers: they are costly to deploy and maintain and produce noise and air pollution. Yet so far they have been an integral part of facilities that usually operate around the clock. Data center operators have been exploring two alternatives to diesel generators: fuel cells and batteries, of which lithium-ion batteries are a particularly promising technology. Bloom Energy has deployed fuel cells in multiple data centers; one eBay data center in Utah uses Bloom Energy fuel cells as its backup power source instead of diesel generators. Lawrence said several Bloom Energy pilot projects to replace diesel generators have been underway since 2019, and one or two major hosting providers have also studied the approach. As the electric vehicle industry has made great strides in increasing the energy density and reducing the cost of lithium-ion batteries, they are quickly gaining a place in the data center industry. They are already being used to replace lead-acid batteries in UPS systems, and the runtime they provide keeps increasing. Schneider's Brown said lithium-ion batteries could eventually replace diesel generators: "I don't think this transition will happen in 2020, but we will track it closely." The key indicators Schneider Electric is watching are the runtime of lithium-ion battery systems and their deployment cost. Two and a half years ago, lithium-ion battery systems could run for 90 minutes; now they are close to 3 hours. None of these trends begins in 2020, and none will reach a decisive inflection point in 2020 either. They are major developments that advanced in 2019, are expected to accelerate further in 2020, and will help drive data center technologies (chips, networking, virtualization, containers, and more) in the coming years.
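The UPS battery-failure prediction described under the first trend can be illustrated with a minimal, hypothetical sketch. The feature names, thresholds, and training data below are invented for illustration and are not drawn from Schneider Electric's or Vertiv's actual products; the sketch only shows the general shape of training a classifier on battery telemetry.

# Minimal, hypothetical sketch of predicting UPS battery failure from telemetry.
# Feature names and data are illustrative; real DCIM tools use far richer inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic telemetry: internal resistance (mOhm), temperature (C), charge cycles.
n = 2000
X = np.column_stack([
    rng.normal(30, 8, n),     # internal resistance
    rng.normal(27, 4, n),     # ambient temperature
    rng.integers(0, 1500, n), # charge/discharge cycles
])
# Label "fails within 90 days": a made-up risk rule plus noise, for demonstration only.
risk = 0.04 * X[:, 0] + 0.05 * X[:, 1] + 0.002 * X[:, 2]
y = (risk + rng.normal(0, 0.5, n) > 5.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("holdout accuracy:", model.score(X_test, y_test))
# Rank batteries by predicted failure probability so technicians can be sent
# to the riskiest units first.
probs = model.predict_proba(X_test)[:, 1]
print("highest-risk battery index:", int(np.argmax(probs)))

As Brown's comments suggest, the usefulness of a model like this depends mainly on how much real operational data is available to train it, not on the sophistication of the algorithm alone.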

    2019 12/26

  • Lenovo Liu Miao: Development Trends and Strategies of a New Generation of Intelligent Cloud Data Centers
    On December 18, 2019, the main forum of the 14th China IDC Industry Annual Ceremony was officially held at the Beijing National Convention Center. A well-known event in the data center and cloud computing industry, and an efficient communication platform for IDC companies, telecommunications operators, the Internet, finance, government, manufacturers, and other upstream and downstream players, it gathered thousands of government leaders, industry experts and corporate representatives. Liu Miao, General Manager of Intelligent Cloud Services at Lenovo Enterprise Technology Group, delivered a speech entitled "Development Trends and Strategies for a New Generation of Intelligent Cloud Data Centers". Starting from Lenovo's own business practice, Liu Miao analyzed in detail the industry's future demand for intelligent cloud data centers, hyperscale data centers, and edge data centers, and emphasized the value of service-oriented models. In his speech, Liu Miao cited Gartner's top ten strategic technology trends for 2020. For the first time, Gartner grouped the ten trends into two categories: people-centric and smart spaces. Within these categories, hyperautomation, multiexperience and democratization on the people-centric side, together with the empowered edge and practical blockchain on the smart-space side, are converging into industry intelligence; examples include new financial service outlets staffed by RPA robots, regional chronic-disease management based on edge clouds, distributed Internet cafes and e-sports venues, and new-energy integrated service networks that support autonomous driving. The emergence of such industry-intelligence applications also matches Lenovo's own reading of technology trends. Liu Miao further emphasized that the future is about services: traditional server vendors will increasingly provide device-related services, delivered to customers in a SaaS-based, digital, and visualized way, much as traditional tire makers that sell, change, or instrument tires will evolve toward a model in which users no longer buy tires but pay per kilometer driven. On the conference theme of IDC, Liu Miao's view was that cloud data centers are still growing rapidly: the number of Internet users in China is several times that of the United States, yet the scale of US data centers is several times that of China, so the industry is optimistic about the long-term development of China's hyperscale data centers as well as its edge data centers. On future data center trends, Liu Miao concluded that, first, data centers are evolving rapidly toward modularity, with micro-module, containerized, and fully modular configurations; second, data centers will be built around the decoupling and tight coupling of user application architectures, oriented toward future operational efficiency and capacity; third, with the rise of edge scenarios, even as computing power concentrates in hyperscale facilities, the share of edge computing power will grow rapidly. Some data suggest that in the future 50% of IoT networks will face bandwidth limitations and 40% of data will need to be analyzed, processed, and stored at the network edge, so the commercialization of 5G will ignite edge computing and edge cloud data centers.
In response to this trend, Lenovo Enterprise Technology Group's intelligent cloud service strategy, full-stack coverage delivered as a "three-stage rocket", fits user needs well. In simple terms, from servers, storage and networking at the hardware layer, up through intelligent data centers to hybrid cloud and business applications, Lenovo uses intelligent data centers (the first stage), the intelligent cloud (the second stage) and intelligent industry applications (the third stage) to comprehensively assist companies in their digital transformation. Specifically, Lenovo's intelligent cloud service model first helps customers plan IDC/EDC facility construction from the perspective of industry experts. Second, Lenovo has an industry-leading facility construction team to help customers fully customize IDC construction, and its general-contracting model provides customers with a complete quality control system and a professional delivery team. Finally, a complete integrated offering, from basic IDC construction through to cloud services, can fully meet customers' end-to-end solution needs. Regarding Lenovo Intelligent Cloud's industry focus in 2020, Liu Miao revealed that it will work with partners to develop industry intelligence in a customized, service-oriented manner around eight major industry ecosystems, such as education, healthcare, and electric power, while also making efforts in new fields and industries.

    2019 12/25

  • Yangtze River Big Data Center Officially Launched
    On October 21, the Yangtze River Survey, Planning, Design and Research Institute announced that the Yangtze River Big Data Center, the first water conservancy big data center in China's survey and design industry, had been completed in Wuhan. The center has so far gathered data from more than ten professional business areas, covering basic geological and topographical survey data, planning, hubs, construction, mechanical and electrical engineering, resettlement, ecology, and environment, as well as Internet data on hydrometeorology, agriculture, forestry, and animal husbandry. Based on preliminary organization of the existing data, 28 data models and related data standards were designed, forming a data asset catalog for the Yangtze River Big Data Center. According to Professor Huang Yan, chief scientist of the Yangtze River Big Data Center and deputy chief engineer of the Yangtze River Design Institute, unified data services have initially been achieved on the center's platform, and applications have been established for flood protection, optimized allocation of water resources, river and lake health maintenance, soil erosion control, ecological and environmental protection, and more. She said the Yangtze River Big Data Center will provide data services for survey, planning and design work across the water conservancy industry, support applications such as whole-life-cycle project management, flood and drought disaster prevention, water environment governance, and water ecological restoration, and serve the public with more accessible information resources.

    2019 12/24

  • 2020 Forecast: Paving The Way for "the Next Decade of the Data Age"
    2020 marks a new beginning: the "next decade of the data era" proposed by Dell Technologies will officially kick off. We enter this era with high expectations, looking forward to technologies that can change the way people live, work and play. So what new breakthroughs and technology trends will set the tone for the coming decade? Dell Technologies has made several major predictions for the year ahead.

Prediction 1: IT infrastructure stays simple

We hold a large amount of data: big data, metadata, structured and unstructured data. It lives in the cloud, on edge devices, in core data centers; it is, in short, everywhere. But it is difficult for companies to ensure that the right data reaches the right place at the right time, because too many systems and services are intertwined across the IT infrastructure and they lack "data visibility", the ability of IT teams to quickly access and analyze the relevant data. With the arrival of 2020, CIOs will make data visibility a top IT priority, because in the end data is the fuel that drives the innovation engine. Companies will work to simplify and automate their IT infrastructure and consolidate systems and services into holistic solutions that improve control and transparency, thereby accelerating digital transformation. Consistency in architecture, orchestration, and service agreements will open new doors for data management and ultimately let data feed the artificial intelligence (AI) and machine learning (ML) that drive IT automation. All of this will help companies achieve better business results faster, and the next decade of innovation will grow from this foundation.

Prediction 2: Hybrid cloud will become a development trend

Public and private clouds can and will coexist, and in 2020 this idea becomes reality. Multi-cloud IT strategies supported by hybrid cloud architectures will play an important role, giving enterprises better data management and visibility while keeping data accessibility and security intact. In fact, IDC predicts that by 2021 more than 90% of enterprises worldwide will combine on-premises/dedicated private clouds, several public clouds, and legacy platforms to meet their infrastructure needs. Private clouds, however, will not simply live in the "heart" of the data center. As 5G and edge deployments roll out, private hybrid clouds will emerge at the edge, ensuring real-time data management and visibility wherever the data is. This means enterprises will expect cloud and service providers to support their hybrid cloud needs effectively across environments. In addition, security and data protection will be deeply integrated into the hybrid cloud environment, especially as containers and Kubernetes are used more widely for application development there. Bolting security onto cloud infrastructure after the fact will not work; it must be built into the overall data management strategy from the edge through the core data center to the cloud.

Prediction 3: Software-defined and cloud-enabled IT will become the direction of transformation

One of the biggest obstacles for IT decision makers driving transformation is resources.
Even planning and forecasting compute and consumption needs for just the coming year is hard when capital and operating expenditure become limiting factors, let alone planning demand for the next 3-5 years. Software-as-a-Service (SaaS) and cloud consumption models have become more widespread, allowing enterprises to purchase flexibly and pay as they go. In 2020, as companies seize the opportunity to move to software-defined and cloud-enabled IT, flexible consumption and "as-a-service" options will see accelerated adoption. As a result, companies will be able to choose the economic model that suits their business, use end-to-end IT solutions to achieve data mobility and visibility, and handle even the most intensive artificial intelligence and machine learning workloads when necessary.

Prediction 4: Edge computing rapidly expands into the enterprise

The edge continues to evolve, and many people still struggle to define exactly what it is and where it exists. Once limited in scope to the Internet of Things (IoT), it is now hard to find systems, applications, services, or even people and places, that are not connected. The edge therefore shows up in many places and will keep expanding under enterprise direction to provide the IT infrastructure enterprises require. 5G connectivity is creating many new use cases and possibilities for healthcare, financial services, education and industrial manufacturing. As a result, SD-WAN and software-defined networking solutions become the core thread of the overall IT infrastructure, ensuring that large data workloads can move quickly and securely across edge, core data center, and cloud environments. Flexibility and agility are the only way to manage and protect data effectively over the long term, and only open software-defined networks can meet that need. As companies realize this, open networking solutions will become a better choice than proprietary ones.

Prediction 5: Smart devices will change the way people work and collaborate

Innovation in personal computers makes new breakthroughs every year: screens become larger and more immersive while the overall form factor gets smaller and thinner. But a more transformative breakthrough is coming from the PC's core components. Systems built with artificial intelligence and machine learning software can now determine, based on usage patterns, when and where power and compute should be optimized. With biometrics, the PC knows it is you the moment you look at the screen. Today's AI and ML applications are smart enough to let the system adjust sound and color to the content you are watching or the game you are playing. Over the next year, these advances will turn personal computers into smarter, more collaborative partners. The PC will be able to optimize power and battery life for the times we are most productive, and even become a largely "self-sufficient" machine that automatically tunes its performance and requests its own maintenance, reducing the burden on users and the number of IT incidents that need to be reported. For end users and the IT teams supporting them alike, this will bring a large boost in satisfaction and productivity.
Prediction 6: Sustainable innovation and development matter more

For a company like Dell Technologies, we want to ensure that our impact on the world does not come at the planet's expense, so sustainable innovation has always been at the core. We will increase investment in recycling for closed-loop innovation, with hardware becoming smaller, more efficient, and made from recycled and renewable materials, to minimize e-waste and maximize the use of existing materials. Dell Technologies reached its "2020 Legacy of Good" goals ahead of schedule, so we have set new goals for 2030: for every product a customer buys, we will recycle an equivalent product; more than half of our product content will be made from recycled or renewable materials, leading the development of the circular economy; and 100% of product packaging will use recycled or renewable materials. As I enter the "next decade of the data era", I am full of optimism and hope. I believe our customers will make full use of their data to deliver technology breakthroughs we can all appreciate, such as more capable devices, faster diagnosis and treatment, more accessible education, less waste and cleaner air. The achievements of the next decade will arrive before we know it.

    2019 12/23

  • Suizhou Big Data Center Established
    On December 19, the Suizhou Big Data Center was officially established, marking the start of a new journey in Suizhou's smart city and digital government construction. Lin Changlun, member of the Standing Committee of the Municipal Party Committee and executive deputy mayor, attended the unveiling ceremony. The Municipal Big Data Center is reported to be a public institution directly under the municipal government. Its main duties are to implement national and provincial laws, regulations, guidelines and policies related to informatization; coordinate the city's informatization construction; guide the development and utilization of economic and social data across the city's various fields; promote the development of the city's big data industry; take charge of building the city's data security guarantee system; implement cross-level, cross-department, cross-system, and cross-business data sharing and exchange; and build a unified city-wide data center. The establishment of the Municipal Big Data Center is a concrete action by Suizhou to implement the major decisions of the Party Central Committee, the State Council, the Provincial Party Committee and the Provincial Government on deepening reform and comprehensive data sharing, and it will provide strong technical support and service guarantees for scientific government decision-making, precise social governance, and efficient public services.

    2019 12/21

  • Hybrid Multi-cloud Era: How IBM Mainframes Continue to Innovate
    The mainframe is a signature product of IBM, a century-old name in the technology industry. Since its birth it has represented the top level of computer systems, and this product line, now 55 years old, shows remarkable vitality: thanks to continual technological innovation, it still efficiently supports the core business applications of many enterprises. Today, two-thirds of Fortune 100 companies, 44 of the world's top 50 banks, and 90% of major airlines run their businesses on IBM Z mainframes. Entering the hybrid multi-cloud era, and based on an accurate grasp of industry trends, the IBM mainframe is reinventing itself. Take the new-generation IBM z15 released a few months ago: with industry-first innovations such as Data Privacy Passports and instant recovery, the z15 can become an important aid for enterprises implementing a hybrid cloud strategy and help users fully enjoy the value of the cloud. Recently the author had the honor of visiting the IBM China Systems Center. Guided by Mr. Xie Dong, Vice President of IBM, Chief Technology Officer of IBM Greater China, and General Manager of the IBM China Systems Technology Center, I gained a deep understanding of the IBM mainframe's past and present, and of the unique value it offers in the hybrid multi-cloud era.

Beginning in 1964: from the moon landing to bank transactions, the mainframe has been there

In his talk, Mr. Xie Dong summarized the development of the IBM mainframe in three phases: the System phase, the zSeries phase and the zEnterprise phase. "System" was the original name of the IBM mainframe. In 1964 IBM released the first mainframe, the IBM System/360, which cost $5 billion to develop. The appearance of System/360 was epoch-making: with many advanced technological innovations it could meet almost all computing needs of the time and became a key that opened the door to the information age. From assisting NASA in calculating the path to the moon to supporting bank transaction systems and airline online ticketing, the mainframe demonstrated unparalleled value. System/360 (1964), System/370 (1970) and System/390 (1990) were the star mainframe products of their eras. In 2000, with the new century, the IBM mainframe entered its own new era: IBM System was renamed Z. On the reason for the name, Xie Dong explained, "Z stands for Zero Downtime. When we designed this system, the positioning was clear from the name: it had to be a zero-downtime machine. Because this system is the supporting environment for core businesses, the goal after going live is zero downtime, so users can keep running without interruption." The z900 (2000), z9 (2005) and z10 (2008) were all products of the zSeries phase, in which reliability and quality were engineered so that there would be no failures and so that system and software upgrades and new application deployments would not require downtime. Then came the zEnterprise phase. The relatively well-known z12 (2012), z13 (2015), z14 (2017) and the latest z15 (2019) were all born in this phase, and with each iteration the IBM mainframe gained more features to safeguard the critical applications of enterprises and organizations.
This year, with IBM's acquisition of Red Hat completed, the IBM mainframe has also made new progress in its use of open source technology. Looking at the long evolution of IBM mainframes, a report released by IDC in September of this year summed it up as a journey from siloed, to connected, to transformative. IDC calls the IBM mainframe a "mainframe capable of transformation", meaning it can fully participate in enterprise transformation and upgrading and empower digital transformation.

Hybrid multi-cloud era: how IBM mainframes continue to innovate

According to a Forrester survey, 78% of Asia-Pacific enterprises have formulated a hybrid multi-cloud strategy, compared with 74% globally. Xie Dong also noted in a signed blog post: "In the second chapter of the journey to cloud, enterprises' critical workloads, from the supply chain to core banking systems, will be migrated to and optimized for the cloud." (Pictured: an IBM Z mainframe on display at the IBM China Systems Center.) Critical applications on the cloud place higher demands on the security, reliability, and agility of the IT infrastructure, and the latest IBM z15 may be the best choice for meeting them while embracing hybrid multi-cloud, given its standout innovations:

· Encryption anytime, anywhere: building on its existing pervasive encryption technology, IBM has introduced the new Data Privacy Passports capability to help users control how data is stored and shared. It not only protects local data at the infrastructure level but also allows data-usage rules to be set, so that individual users' data access can be managed uniformly across private, public and hybrid clouds, improving the company's data privacy protection.

· Agile development of cloud-native applications: IBM and Red Hat have announced plans to support the Red Hat OpenShift platform on IBM Z. The goal is to combine the scalability and security of IBM Z with the flexibility to run, build, manage, and modernize cloud-native workloads on the architecture of choice, bringing cloud-native application development to IBM Z so users can modernize existing applications, build new cloud-native applications, and securely integrate their most important workloads across clouds for greater competitive advantage.

· New instant recovery capability: this reduces the cost and impact of planned and unplanned downtime and helps meet rapidly changing market demands in a timely manner. Whether an outage is planned or unplanned, instant recovery lets customers unleash the full capability of the z15: by drawing on built-in reserve capacity, it helps systems return to pre-outage service level agreements (SLAs) and process the backlog of transactions accumulated during downtime up to 2.5 times faster.

Looking back over the mainframe's entire development, IBM has given it different meanings against the backdrop of different eras, but its development has always stemmed from IBM's deep insight into user needs. On that basis, and with advanced technical strength, IBM keeps innovating, injecting a steady stream of vitality into the mainframe.

    2019 12/20

  • Big Data Sets Impressive New Standards for Integrated Business Systems
    Big data is changing the landscape of integrated business systems by setting impressive new standards. Industry media outlet Wired published an article back in 2013 describing the role of big data in integrated business systems. James Kobielus, Wikibon's chief artificial intelligence and data analyst, has said that integrated business systems can tap the potential of artificial intelligence and big data in multiple ways: every industry is looking for ways to use big data to improve its competitive advantage, and integrated business systems provide a centralized repository where those benefits can be realized.

The importance of big data in integrated business systems in the 21st century

If an enterprise uses multiple systems to manage different processes, such as payroll, employee management, customer support, and inventory management, it is worth considering innovative solutions that integrate those business systems, because isolated software systems face challenges that can ultimately hold back business growth. Big data makes it easier to handle these processes through an integrated business system. Such systems are far less fragmented, which benefits organizations in many ways. One benefit of using AI to manage integrated business systems is real-time data availability. Without integration, companies may lack real-time visibility into the brand's overall performance, and collecting data from separate systems becomes a time-consuming process that diverts attention from important business decisions. For businesses whose systems are not yet integrated, there are a few key advantages to consider: integrated systems make data collection more efficient, and they can use big data to identify discrepancies while cross-referencing documents from different departments.

Increase employee productivity

Big data plays a very important role in increasing employee productivity. Ryan Ayers, CEO of Ryan Ayers Communications, discussed these benefits on Experify, noting that the global courier company UPS spends $1 billion a year on big data tools to improve employee productivity. The same benefits apply to integrated business systems. When a company chooses an innovative integrated software solution (such as QuickBooks time tracking), it can use it to increase employee productivity: timesheets that use sophisticated artificial intelligence algorithms and integrate with accounting software let companies manage time tracking and payroll more efficiently, with the two systems interrelated rather than operating separately. Increasing employee productivity is often a top concern for managers, as it is critical to brand growth and business success, and integration reduces the time employees must spend on repetitive tasks, such as those tied to traditional timesheets, that ultimately drag down productivity and motivation.

Improve business performance

Using stand-alone systems makes collecting accurate data in real time impossible. With integrated systems, companies can collect the most accurate data in real time and make important decisions for their business.
Businesses gain real-time digital visibility, which ultimately helps growth, because they have useful, accurate insight into different processes and departments; neither the business nor its employees need to waste precious time trying to extract accurate data from different systems, because that work is greatly simplified.

Reduce the risk of errors

As is well known, no company wants to rely on traditional methods for processes such as payroll and employee time tracking, since they tend to produce a large number of hard-to-find errors. Fortunately, innovative software solutions that leverage big data will reduce the number of errors in the business, and integrated systems reduce errors further to provide the highest accuracy, which benefits businesses of any size and industry. Time-consuming tasks that often lead to human error, such as re-keying data, will no longer fall to corporate employees.

Save time

Corporate IT departments can save a great deal of time, because maintaining multiple independent systems is time-consuming. Integrating the enterprise's systems reduces maintenance effort, since IT no longer has to monitor various applications and work out which need updating and which do not. As a result, IT departments can focus on more important issues, such as optimizing business processes with innovative ideas, and on maintaining the business's security infrastructure, which is critical to growth and successful operations.

Cut costs

An integrated software solution is often much more cost-effective than multiple independent systems, which means businesses can reduce expenses and enjoy improved cash flow. Startups and smaller companies are always looking for innovative ways to cut costs and invest where it matters, and reducing the cost of business software systems brings undeniable benefits, ultimately helping the business succeed and grow.

Choose innovative solutions to drive the business

Because technology is developing rapidly, companies have good reason to take advantage of innovative solutions. Businesses should always look for improved solutions rather than clinging to traditional software and business processes that can stall growth. Companies should encourage their employees to propose innovative solutions that drive the business forward, and to that end they should be prepared to adapt to the age of technology. Integrating business systems should be the first step toward that success: because these solutions help reduce costs, save time, and increase productivity, innovation should be a major concern for companies in every industry.

Big data makes integrated business systems more reliable

Artificial intelligence and big data bring many significant benefits to managing business processes, and integrated business systems use cutting-edge big data technologies to take advantage of them.

    2019 12/19

  • Speed up the Deep Integration of Big Data and the Real Economy
    The rapid development of information technologies represented by big data, the Internet of Things, and cloud computing is leading a new round of technological revolution and industrial change, and is increasingly changing how people produce and live, how the economy operates, and how society is governed. Big data is both a big opportunity and a big dividend. The Fifth Plenary Session of the 18th Central Committee of the Communist Party of China proposed implementing a national big data strategy, marking big data's elevation to a national strategy, and the 19th National Congress clearly stated the need to promote the deep integration of the Internet, big data, and artificial intelligence with the real economy. Building a modern economic system is inseparable from the development and application of big data. Accelerating the deep integration of big data and the real economy will inject strong endogenous momentum into the quality, efficiency, and dynamic transformation of economic development, and help China's economy move from high-speed growth to high-quality development.

The level of integrated development keeps rising, but bottlenecks cannot be ignored

At present, there are five major bottlenecks in promoting the deep integration of big data and the real economy. First, the depth of integration between big data and industry needs to be strengthened: integrating big data with R&D and design, production management, key equipment and other links remains difficult, and the pressure of industrial transformation and upgrading is considerable. Second, the ability of big data to promote agricultural development urgently needs breakthroughs: the cost of data collection and preservation is high, and most agricultural enterprises have only just started building information resources and applying information technology. Third, the capacity for integrated innovation between big data and the service industry urgently needs improvement: few companies conduct precise, real-time, two-way marketing online, and few deeply integrate big data with their services. Fourth, policy and service support for the integrated development of big data and the real economy is insufficient. Fifth, information infrastructure construction lags, with large gaps between the eastern and western regions and between cities and rural areas. In the information age, data has become a new production factor alongside labor, land, capital, and technology, and an increasingly important new source of momentum for economic and social development. In recent years, China's big data industry has developed rapidly, and the pace of integration between big data and the real economy has accelerated. Information technology represented by big data has penetrated deeply at the level of enterprises, industries and regions, producing a far-reaching impact.
First, the deep integration of big data with industry promotes continuous improvement in industrial quality and efficiency, leading industry toward intelligent production, networked collaboration, personalized customization, and service-oriented extension. Second, the deep integration of big data with agriculture keeps optimizing production and management, supports continuous improvement in the economic returns of agriculture, and leads it toward precise production management, full quality traceability, and networked marketing and sales. Third, the deep integration of big data with the service industry fosters new business forms and models, promotes the transformation and upgrading of services, and leads them toward platform-based, intelligent, and shared models. The level of integration between big data and the real economy is constantly improving. It is necessary to take "big data + the real economy" as the starting point, strengthen top-level design, improve the policy and service system for integrated development, consolidate its foundations, stimulate the motivation of real-economy enterprises to integrate with big data, and comprehensively promote the deep integration of big data with the primary, secondary and tertiary industries.

    2019 12/18

  • 5 Cloud Computing Security Fundamentals and Best Practices
    Businesses moving to the cloud must take on new responsibilities, develop new skills, and implement new processes. The first step toward improving cloud security is to assume that there is no security at all. Cloud computing has changed the way companies work and will continue to disrupt traditional business models. According to research firm IDC, public cloud spending will more than double by 2023, from $229 billion this year to nearly $500 billion. It is no secret that companies moving their business to the cloud can significantly reduce costs and increase efficiency: users can launch cloud instances in minutes, scale computing resources up or down as needed, and pay only for the products and resources they use, avoiding high upfront hardware and maintenance costs.

Opportunities and risks will multiply

But don't forget: businesses are storing their data on third-party servers, which, although under their control, are still owned by a third party. Even if the cloud provider's environment is highly secure, the content in the cloud, the applications and data, remains the enterprise's own responsibility. Many companies have put cloud security on the board's agenda, because a failure here can seriously damage corporate reputation and shareholder value. Enterprise data moving to cloud platforms beyond traditional boundaries expands the attack surface, and as more sensitive information is stored in the cloud, cloud resources increasingly become targets for cyber criminals.

Prepare for new threats

As businesses move to the cloud, they will have to take on new responsibilities and develop and adjust processes to deal with many unknown threats. The secret to improving cloud security is to assume there is no security at all when assessing the overall security posture. Public cloud security involves many elements, so it can be hard to know where to start. If your business is already on a cloud platform, or is planning to migrate to one, there are five best practices you can follow to protect its public cloud adoption.

1. Know your responsibilities

Cloud security is based on a shared responsibility model. Cloud providers are responsible for protecting the physical network and securing the infrastructure; businesses are responsible for protecting their data, applications, and content, including elements such as user access and identity. Keep in mind that the business is responsible for managing and protecting everything it places in the cloud.

2. Integrate compliance

Compliance is one of the main drivers of demand for next-generation cloud security services. The only way to keep up with new and forthcoming regulations is to build regulatory compliance into day-to-day operations, with real-time snapshots of the network topology and real-time alerts on policy changes. Think from the auditor's standpoint: consider everything they will require when reviewing the network and proactively fold those reports into daily work.

3. Automate defense

Automation is a key component of cloud security. Security audits, controls, patching, and configuration management can all be automated, which helps reduce risk.
As long as the right tools and processes are in place, automation can significantly reduce the risk of human error, which is critical for managing change at scale and preventing security breaches. A secure, automated cloud platform helps monitor the network in real time and gives businesses the ability to respond quickly to threats (a minimal configuration-check sketch follows this item).

4. Protect the environment as early as possible

Organizations must maintain strict security controls even in development and quality assurance (QA) environments. By embedding appropriate controls in application development, early adopters are introducing security early in the life cycle. Newer security approaches promote the concept of security by design, checking source code for vulnerabilities even during development. Whatever security measures a company takes in the cloud, it should ensure that a similar approach is applied in its internal environment.

5. Apply on-premises lessons

Although cloud computing is a major technological shift and may look like a completely different environment, the basic principles of security remain the same, so it is important to take the same approach on cloud platforms as on traditional on-premises networks. It is critical for enterprises to protect networks, servers, and endpoints with firewall, server, and endpoint protection solutions; these can monitor corporate traffic, prevent unauthorized access, and protect corporate data assets in the cloud from destruction, infection, or loss. Endpoint and email security keeps corporate devices current while preventing unauthorized access to cloud accounts. As businesses migrate to the public cloud, they should bring their on-premises experience with them.
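As a rough illustration of the "automate defense" practice above, the sketch below scans a set of hypothetical resource configurations for a few common risky settings. The resource fields, rule names, and example data are invented for illustration; a real environment would pull configurations from the cloud provider's APIs and apply the organization's own policy set.

# Minimal, hypothetical sketch of automated configuration checking.
# The resource schema and rules below are illustrative only.
from typing import Callable, Dict, List

Resource = Dict[str, object]

# Each rule returns a finding message if the resource violates it, else an empty string.
RULES: Dict[str, Callable[[Resource], str]] = {
    "no_public_buckets": lambda r: (
        "storage bucket is publicly readable"
        if r.get("type") == "bucket" and r.get("public_read") else ""
    ),
    "no_open_ssh": lambda r: (
        "port 22 open to 0.0.0.0/0"
        if r.get("type") == "firewall_rule"
        and r.get("port") == 22 and r.get("source") == "0.0.0.0/0" else ""
    ),
    "encryption_at_rest": lambda r: (
        "disk is not encrypted at rest"
        if r.get("type") == "disk" and not r.get("encrypted") else ""
    ),
}

def audit(resources: List[Resource]) -> List[str]:
    """Run every rule against every resource and collect findings."""
    findings = []
    for res in resources:
        for name, rule in RULES.items():
            msg = rule(res)
            if msg:
                findings.append(f"[{name}] {res.get('id')}: {msg}")
    return findings

if __name__ == "__main__":
    inventory = [
        {"id": "bucket-logs", "type": "bucket", "public_read": True},
        {"id": "fw-admin", "type": "firewall_rule", "port": 22, "source": "0.0.0.0/0"},
        {"id": "disk-db", "type": "disk", "encrypted": True},
    ]
    for finding in audit(inventory):
        print(finding)  # in practice these would feed an alerting or ticketing system

Run on a schedule, for example from a CI pipeline or a scheduled cloud function, checks like this turn policy into code, which is what makes large-scale, low-error change management possible.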

    2019 12/17

  • 10 Predictions for the Development of the Data Center Industry in 2020
    As data center technology develops in 2020, enterprises need to strike a better balance between on-premises data centers and cloud computing resources, adopt artificial intelligence in their servers, and work hard to manage data expansion effectively. Industry media usually publish forecasts for the coming year. Some developments are easy to see coming, such as the rise of cloud computing and the advance of SSDs; others, such as businesses repatriating workloads from cloud platforms back to on-premises data centers, are less obvious, so experts' predictions about the data center industry can occasionally surprise. Here, then, are 10 predictions for the data center industry in the coming year.

1. IoT boosts data center growth in urban areas

Because this is already happening, it is not a difficult prediction. For a long time, data centers have been built in remote locations close to renewable energy sources (usually hydroelectric power), but demand will now prompt more data center construction in urban areas. The Internet of Things will be a driving factor, and more data center providers (such as Equinix and DRT) will act as network interconnection providers.

2. The rise of network accelerators

The use of big data and the various types of artificial intelligence means that a large amount of data will be generated and processed, and not all of it can be processed where it is generated. What is needed is a network flow controller that takes the job of moving data off the CPU, freeing it for its main task of processing data. More and more network accelerators (such as Mellanox's ConnectX series) will therefore come to market, letting the CPU do the data processing while the accelerator moves large volumes of data faster.

3. NVMe over fabrics will grow

Non-Volatile Memory Express (NVMe) is a storage interface that, like Serial Advanced Technology Attachment (SATA), connects drives to the system. The drawback of SATA is that it was designed for HDDs, so it cannot fully exploit the speed and parallelism of SSDs. Early enterprise SSDs had their own problem: they could only talk to the physical server they sat in, while servers also rely on storage arrays, which means network hops and latency. NVMe over fabrics (NVMe-oF) is an important advance: it lets an SSD in one server communicate over the network with a drive elsewhere. This direct communication is critical to improving data movement in enterprise computing and digital transformation.

4. Cheaper storage-class memory

Storage-class memory (SCM) plugs into a DRAM slot and can behave like DRAM, but it can also behave like an SSD: it offers near-DRAM speed together with persistence, effectively acting as a cache for SSDs. Intel and Micron Technology co-developed storage-class memory products, but the two companies are no longer cooperating. Intel introduced its SCM product Optane in May of this year, and Micron brought QuantX to market in October. South Korean memory giant SK Hynix is also developing an SCM product that differs from the 3D XPoint technology used by Micron and Intel. All of this should advance storage technology and, with luck, reduce prices: a 512GB Optane memory module is currently priced at $8,000.
The Xeon processors required to use it are also expensive, so assembling a complete server becomes very costly. Advances in technology and competition should push storage prices down, which will make this type of memory more attractive to businesses.

5. Artificial intelligence automation in servers

All server vendors have added artificial intelligence to their systems, but Oracle clearly leads in autonomy, from hardware to operating system, applications, and middleware stack. Hewlett-Packard, Dell and Lenovo will continue to make progress, while vendors like Supermicro will fall behind, because they offer only the hardware stack and do nothing at the operating system level; they will also lag in storage, an area where the three big server vendors excel. Oracle may not be a top-five server vendor, but no one can ignore its contribution to automation. Expect the other brand-name suppliers to keep raising their level of automation.

6. Slower cloud migration

Remember when many companies wanted to close their data centers and move everything to the cloud? That idea carried a lot of weight at the time. IDC's latest CloudPulse survey shows that 85% of enterprises plan to shift workloads from public to private environments next year, and a recent Nutanix survey found that 73% of respondents are moving some applications from the public cloud back on-premises, with security cited as the main reason. Because security is a real enough concern for some companies and some data, cloud migration may slow somewhat as people become more discerning about what they keep in the cloud and what stays behind the firewall.

7. Data expansion, part 1

An IDC survey indicates that most data is not where it should be. Only 10% of company data is "hot" (accessed and used repeatedly), 30% is "warm" (used semi-regularly), and the remaining 60% sits in cold storage and is rarely accessed. The problem is that the data is scattered everywhere and often lives in the wrong tier. Many storage companies focus on deduplication rather than storage tiering; Spectra Logic is one company addressing the issue, and if it succeeds, HP and Dell will hopefully follow suit (a short tiering sketch follows this item).

8. Data expansion, part 2

IDC predicts that the total volume of data worldwide will reach 175 ZB by 2025, up from about 32 ZB today, most of it unused. There was a time when the data warehouse model forced data to be classified, processed, and stored as something useful; today people fill data lakes with endless data from ever more sources, such as social media and the Internet of Things. People will have to work to make sense of petabytes of data-lake clutter and start becoming pickier about what they store. They will question spending large amounts of money on drives and storage arrays to hold masses of unused, worthless data, and they will either return to a data warehouse model that keeps data usable or be left at a loss.

9. More servers will mix processors

Ten years ago, it hardly mattered whether a server was a Xeon tower or a four-socket rack server in a cabinet: they were all based on x86 processors. Now, more server designs incorporate onboard GPUs, Arm processors, artificial intelligence accelerators, and network accelerators, and that requires some changes to server design.
First, as many chips run faster and hotter in confined spaces, liquid cooling will become more necessary. Second, the software stack needs to become more robust to handle all of these chips, which will require more work from Microsoft and the Linux vendors.

10. IT workloads will change

Don't assume that automation will leave IT staff playing games on their iPhones. As systems evolve, IT professionals will face many new challenges, including:

• Fighting shadow IT.
• Addressing digital transformation.
• Developing an artificial intelligence strategy to keep up with competitors.
• Responding properly to the impact of new artificial intelligence strategies.
• Maintaining business security governance.
• Handling ever-growing data inflows and figuring out how to deal with them.
• Responding to customers, and protecting company reputation, on social media faster than ever.
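The hot/warm/cold breakdown in prediction 7 maps naturally onto an automated tiering policy. The sketch below is a hypothetical illustration: the access-count thresholds and tier names are invented, and a real system would use the access statistics kept by the storage or file system rather than a hand-maintained inventory.

# Minimal, hypothetical sketch of hot/warm/cold data tiering by access frequency.
# Thresholds and tier names are illustrative, not taken from any product.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ObjectStats:
    name: str
    accesses_last_90d: int
    last_access: datetime

def choose_tier(stats: ObjectStats, now: datetime) -> str:
    """Return the target tier for one object based on simple access heuristics."""
    idle = now - stats.last_access
    if stats.accesses_last_90d >= 50 and idle < timedelta(days=7):
        return "hot"    # keep on fast NVMe/SSD
    if stats.accesses_last_90d >= 5:
        return "warm"   # ordinary disk or cheaper SSD
    return "cold"       # object storage, tape, or an archival tier

if __name__ == "__main__":
    now = datetime(2020, 1, 1)
    inventory = [
        ObjectStats("orders.db", 400, now - timedelta(days=1)),
        ObjectStats("q3_report.pdf", 12, now - timedelta(days=20)),
        ObjectStats("2015_logs.tar", 0, now - timedelta(days=900)),
    ]
    for obj in inventory:
        print(f"{obj.name:>14} -> {choose_tier(obj, now)}")

Even a crude policy like this, applied continuously, keeps the roughly 60% of rarely touched data off expensive arrays, which is the point the IDC figures in predictions 7 and 8 are making.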

    2019 12/16
