Exposing the Vulnerabilities of Cloud Environments: Embrace On-Premise Machine Monitoring Systems for Enhanced Security
As organizations navigate the landscape of data breaches in cloud environments, it becomes evident that the illusion of security is shattered. The Verizon 2020 Data Breach Investigations Report (DBIR) reveals an alarming 43% surge in web application breaches, with over 80% of these incidents leveraging stolen credentials [¹]. Compounding the issue, nearly a quarter of all breaches involved cloud assets, with compromised credentials responsible for a staggering 77% of those cases [²].
Amidst these vulnerabilities, a stark reality emerges: the reliance on cloud vendors and third parties exposes organizations to potential security gaps beyond their control. The lack of complete oversight in securing and protecting access to data within cloud environments raises concerns about maintaining a robust security posture.
To address these challenges and avoid exposing the organization to expanding threats, an alternative solution presents itself: embracing on-premise machine monitoring systems. By adopting an on-premise approach, organizations regain control over their data security and mitigate the risks associated with cloud environments.
An on-premise machine monitoring system empowers organizations to establish stringent measures within their own infrastructure. By safeguarding sensitive information in their secure environment, organizations eliminate the vulnerabilities inherent in relying solely on cloud platforms. With complete control over data management, access controls, and security protocols, organizations can proactively safeguard against stolen credential data breaches.
Moreover, on-premise machine monitoring systems seamlessly integrate with existing internal IT security measures. By enforcing robust password policies and implementing multi-factor authentication (MFA) for all users, organizations fortify their defense mechanisms. Combining technology-driven solutions with comprehensive security training for employees further strengthens the overall security posture. By equipping users with the knowledge and tools to identify and thwart social engineering attacks, such as phishing and vishing, organizations can effectively diminish the risk of compromised credentials.
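To make the MFA recommendation above concrete, here is a minimal sketch of time-based one-time password (TOTP) verification, the mechanism behind most authenticator apps, following RFC 6238. This is illustrative only; a production deployment should use a vetted library plus server-side rate limiting, and the secret shown in tests is the RFC's published test value, not anything from this article.

```python
# Minimal TOTP (RFC 6238) sketch: derive a one-time code from a shared
# secret and the current 30-second time step, then verify a submitted
# code with a small window for clock drift.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, at=None, digits=6, step=30):
    """Compute the TOTP code for a base32-encoded secret at a given time."""
    key = base64.b32decode(secret_b32)
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify(secret_b32, submitted, window=1, step=30):
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * step), submitted)
        for i in range(-window, window + 1)
    )
```

`hmac.compare_digest` is used for the comparison so that verification time does not leak how many leading digits matched.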
Embracing an on-premise machine monitoring system not only addresses the vulnerabilities of cloud environments but also empowers organizations to take charge of their data security. By investing in their own infrastructure, organizations regain control over their security landscape, mitigating the risks posed by expanding threats.
In conclusion, the vulnerabilities of cloud environments and the reliance on cloud vendors and third parties necessitate a strategic shift towards on-premise machine monitoring systems. By adopting this alternative solution, organizations regain control over their data security, reduce the risks of stolen credential data breaches, and reinforce their overall security posture.
References:
[¹] “Verizon DBIR 2020: Credential Theft, Phishing, Cloud Attacks,” CyberArk. Available at: [Link to the source]
[²] “Stolen credentials, cloud misconfiguration are most common causes of breaches: study,” IT World Canada. Available at: [Link to the source]
[³] “Tackling The Double Threat From Ransomware And Stolen Credentials,” Forbes. Available at: [Link to the source]
[⁴] “How to Prevent Stolen Credentials in the Cloud,” CSO Online. Available at: [Link to the source]
Critique on the Negative Implications of Cloud Computing
Introduction: Cloud computing has undoubtedly revolutionized the IT industry, offering numerous benefits such as scalability, flexibility, and increased accessibility. However, it is essential to critically analyze the negative implications associated with this technology. This critique explores the potential downsides of cloud computing, focusing on the high costs and hidden expenses highlighted in several articles.
Conclusion: While cloud computing has undoubtedly brought significant advancements, it is crucial to consider the negative implications associated with this technology. The critique has shed light on the high costs and hidden expenses, including budget overruns, hidden fees, and diminishing ROI. Additionally, the issue of vendor lock-in can hinder organizations’ flexibility and strategic decision-making. By recognizing these challenges, organizations can better prepare and strategize to mitigate the negative implications while leveraging the benefits of cloud computing effectively.
The Cloud Backlash Has Begun
The great cloud migration, which began about a decade ago, brought about a significant revolution in the field of IT. Initially, small startups and businesses without the means to build and manage physical infrastructure were the primary users of cloud services. Additionally, companies saw the benefits of moving collaboration services to a managed infrastructure, leveraging the scalability and cost-effectiveness of public cloud services. This environment enabled cloud-native startups like Uber and Airbnb to thrive and grow rapidly.
In the subsequent years, a vast number of enterprises embraced cloud technology, driven by its ability to reduce costs and accelerate innovation. Many companies adopted “cloud-first” strategies, leading to a wholesale migration of their infrastructures to cloud service providers. This shift represented a paradigm change in IT operations.
However, as cloud-first strategies have matured, limitations and challenges have emerged. The efficacy of these strategies is now being questioned, and returns on investment (ROIs) are diminishing, fueling a significant backlash against cloud adoption. This backlash is driven primarily by three factors: escalating costs, increasing complexity, and vendor lock-in.
The widespread adoption of the cloud has led to a phenomenon known as “cloud sprawl,” where the sheer volume of workloads in the cloud has caused expenses to skyrocket. Data-intensive processes such as shop floor machine data collection are a poor fit for the cloud; manufacturers are finding that datasets of hundreds of gigabytes should never have left the premises. Enterprises are now running critical computing workloads, storing massive volumes of data, and executing resource-intensive programs such as machine learning (ML), artificial intelligence (AI), and deep learning on cloud platforms. These activities come with substantial costs, especially given the need for high-performance resources like GPUs and large storage capacities.
In some cases, companies spend up to twice as much on cloud services as their previous on-premises systems. This significant cost increase has sparked a realization that the cloud is not always the most cost-effective solution. As a result, a growing number of sophisticated enterprises are exploring hybrid strategies, which involve repatriating workloads from the cloud back to on-premises systems.
By developing true hybrid strategies, organizations aim to leverage the benefits of both cloud and on-premises systems. This approach allows them to optimize their IT infrastructure based on the specific requirements of different workloads and data science initiatives. Moreover, hybrid strategies offer greater control over costs, reduced complexity, and increased flexibility to avoid vendor lock-in.
In fact, leading technology companies like Nvidia have estimated that moving large and specialized AI and ML workloads back on premises can result in significant savings, potentially reducing expenses by around 30%.
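A back-of-the-envelope comparison shows how such savings can arise for a steady, GPU-heavy workload: cloud costs scale linearly with usage, while on-premises hardware is a fixed cost amortized over its useful life. All figures below are illustrative assumptions, not vendor pricing or numbers from the article.

```python
# Hypothetical cloud vs. on-premises monthly cost comparison for a
# sustained ML workload. Rates, capex, and opex are made-up examples.

def monthly_cloud_cost(gpu_hours, rate_per_gpu_hour, storage_tb, storage_rate_tb):
    """Pay-as-you-go: cost scales linearly with usage."""
    return gpu_hours * rate_per_gpu_hour + storage_tb * storage_rate_tb

def monthly_onprem_cost(capex, amortization_months, monthly_opex):
    """Hardware cost amortized over its useful life, plus power/admin."""
    return capex / amortization_months + monthly_opex

cloud = monthly_cloud_cost(gpu_hours=2000, rate_per_gpu_hour=3.0,
                           storage_tb=50, storage_rate_tb=25)
onprem = monthly_onprem_cost(capex=120_000, amortization_months=36,
                             monthly_opex=1500)

print(f"cloud:   ${cloud:,.0f}/month")      # $7,250
print(f"on-prem: ${onprem:,.0f}/month")     # ~$4,833
print(f"savings: {1 - onprem / cloud:.0%}")  # ~33%
```

Under these assumed numbers the on-premises option comes out roughly a third cheaper, in the same ballpark as the estimate cited above; with bursty or low utilization the comparison can easily flip the other way, which is the argument for a hybrid strategy.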
In conclusion, while the great cloud migration brought undeniable advantages in terms of scalability and innovation, the limitations and challenges associated with cloud-first strategies have triggered a backlash. To address these issues, enterprises are embracing hybrid strategies, repatriating critical workloads to on-premises systems while leveraging the benefits of both cloud and traditional infrastructure. This evolution represents the next generational leap in IT, enabling organizations to support their increasingly business-critical data science initiatives while regaining control over costs and complexity. If your organization has data being collected and stored in the cloud, you may want to start planning to migrate that ever-growing data back on premises and mitigate the costs. If your organization is considering a cloud solution, think again.
Resource: https://techcrunch.com/2023/03/20/the-cloud-backlash-has-begun-why-big-data-is-pulling-compute-back-on-premises/
Thomas Robinson is COO of Domino Data Lab.
What Is Continuous Improvement?
Continuous improvement projects are initiatives undertaken by organizations to enhance processes, products, or services incrementally over time. The goal is to achieve small, ongoing improvements that can bring significant long-term benefits. These projects are typically driven by a structured approach that involves identifying areas for improvement, implementing changes, and evaluating the results to guide further improvements.
Continuous improvement projects are fundamental to many organizations, enabling them to adapt, innovate, and stay competitive in a rapidly changing environment. By fostering a culture of continuous improvement, organizations can drive incremental enhancements that lead to long-term success.
Downtime Is Inevitable. Unplanned Downtime Does Not Have to Be.
Downtime and production losses are something every manufacturer experiences. The good news is that technology solutions like MERLIN are available that dramatically reduce the main sources of revenue loss: Unplanned Downtime, Minor Stoppages, and Changeover Time.
When solutions like MERLIN are implemented, manufacturers quickly realize how much time and revenue is lost with traditional strategies that are manual, time-consuming, and ineffective.
Based on more than 25 years of experience in manufacturing, we’ve outlined the top 3 profit killers in the industry and how they can be avoided.
Minor stoppages are typically the most hidden factors of profit loss, with dramatically more impact on downtime and revenue than manufacturers realize.
Traditional manual, paper-based systems rarely capture minor stoppages, and the data is often unreliable.
MERLIN, along with its IIoT technology solutions, captures every downtime event and the root cause of each stoppage.
Example: A packaging manufacturer manually tracked stoppages but only captured unplanned downtime lasting 5 minutes or more.
The manufacturer implemented MERLIN’s Tempus Enterprise Edition platform to gain real-time visibility into machine-level performance, including all stoppages.
In just one week, MERLIN identified micro stops totaling 7 hours. These were unplanned stops that had previously gone unrecorded. The platform also alerted operators at the time of each stoppage so problems could be fixed as they happened.
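The detection logic behind a case like this can be sketched simply: given timestamped machine cycle events, any gap longer than a micro-stop threshold but shorter than the manually-reported downtime threshold is a hidden stoppage. The field names, thresholds, and events below are hypothetical; MERLIN's actual data model is not shown here.

```python
# Hypothetical micro-stop detection from timestamped machine cycle events.
# Gaps over `micro` but under `reported` (5 min, matching the example
# above) are the stoppages that manual tracking misses.
from datetime import datetime, timedelta

def find_micro_stops(cycle_times, micro=timedelta(seconds=30),
                     reported=timedelta(minutes=5)):
    """Return (start, duration) for each gap between cycles that is too
    short to be logged manually but long enough to add up."""
    stops = []
    for prev, curr in zip(cycle_times, cycle_times[1:]):
        gap = curr - prev
        if micro < gap < reported:
            stops.append((prev, gap))
    return stops

events = [datetime(2024, 1, 8, 9, 0, 0),
          datetime(2024, 1, 8, 9, 0, 20),
          datetime(2024, 1, 8, 9, 2, 0),   # 100 s gap -> micro-stop
          datetime(2024, 1, 8, 9, 2, 20),
          datetime(2024, 1, 8, 9, 10, 0)]  # 460 s gap -> already reported

stops = find_micro_stops(events)
total = sum((g for _, g in stops), timedelta())
print(len(stops), total)  # 1 micro-stop totaling 100 seconds
```

Run continuously against live machine data rather than a static list, the same comparison is what allows an alert to fire at the moment a stoppage occurs.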
Downtime is the largest source of lost production time and revenue. Yet, it’s estimated that 80% of manufacturers cannot accurately calculate their downtime or understand the costs associated with lost production.
MERLIN Tempus provides real-time insight into the source of unplanned downtime, including which machines have the most occurring faults and the most aggregated downtime.
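The kind of roll-up described here — which machines fault most often, and which accumulate the most downtime and cost — can be illustrated with a few lines over event records. The machines, fault codes, and the cost rate below are invented for the example.

```python
# Illustrative downtime roll-up by machine: fault counts, aggregated
# minutes, and an assumed cost of lost production.
from collections import defaultdict

events = [  # (machine, fault_code, minutes_down) -- hypothetical records
    ("Press-1", "E-STOP", 12),
    ("Press-1", "JAM", 4),
    ("Lathe-2", "TOOL_WEAR", 25),
    ("Press-1", "JAM", 6),
]

RATE_PER_MINUTE = 40  # assumed value of lost production, $/min

summary = defaultdict(lambda: {"faults": 0, "minutes": 0})
for machine, fault, minutes in events:
    summary[machine]["faults"] += 1
    summary[machine]["minutes"] += minutes

# Rank machines by aggregated downtime, worst first.
for machine, s in sorted(summary.items(),
                         key=lambda kv: kv[1]["minutes"], reverse=True):
    print(f"{machine}: {s['faults']} faults, {s['minutes']} min, "
          f"${s['minutes'] * RATE_PER_MINUTE:,} lost")
```

Note that the machine with the most faults (Press-1, three short jams) is not the one with the most downtime (Lathe-2, one long tool-wear stop) — which is exactly why both views matter.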
Changeover time accounts for the largest share of overall downtime. Yet, most manufacturers have little insight into how long changeovers take or what they can do to reduce changeover time.
A SMED initiative (Single Minute Exchange of Dies) is the standard technique for analyzing and reducing the time it takes to complete equipment changeovers. Most SMED initiatives are manual projects using Excel spreadsheets and stopwatches.
MERLIN Tempus accurately compares estimated vs. actual changeover times and accelerates cost savings.
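The core of that estimated-vs-actual comparison, which a SMED analysis would otherwise do with spreadsheets and stopwatches, can be sketched as follows. Job names and durations are made up for illustration.

```python
# Hypothetical planned-vs-actual changeover comparison: flag and rank
# the changeovers that ran over plan, and total the lost minutes.
changeovers = [  # (job, planned_min, actual_min) -- example data
    ("Die A -> Die B", 45, 68),
    ("Die B -> Die C", 30, 29),
    ("Die C -> Die A", 45, 61),
]

overruns = [(job, actual - planned)
            for job, planned, actual in changeovers if actual > planned]
total_overrun = sum(delta for _, delta in overruns)

# Worst offenders first: these are the changeovers to study in detail.
for job, delta in sorted(overruns, key=lambda x: x[1], reverse=True):
    print(f"{job}: {delta} min over plan")
print(f"total overrun: {total_overrun} min")  # 23 + 16 = 39 min
```

Ranking overruns this way points the SMED effort at the specific changeovers where converting internal setup steps to external ones would pay off most.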
Are you ready to stop the profit killers in your manufacturing organization? It’s easier than you think. Rapid implementation of MERLIN Tempus means you’ll have visibility into your plant, line, and machine data in just days! Contact an expert from Memex today to learn more.