By Nicholas Fearn

An AWS tech stack can aid business growth and facilitate efficient operations, but misconfigurations have become all too common and stall this progress

Amazon Web Services (AWS) has become the lifeblood of millions of modern businesses, both big and small. But while this popular cloud platform enables them to manage and scale their operations with impressive speed, simplicity and affordability, it also represents a significant security and privacy risk if mismanaged by users.

An insecure or improperly configured AWS tech stack provides a gateway for cyber criminals to enter corporate systems and sensitive files. The biggest example of this occurred in 2019, when an ex-Amazon employee stole the data of 100 million Capital One customers simply by exploiting a misconfigured web application firewall in the financial services giant’s AWS tech stack.

The incident ended with a high-profile lawsuit and a $190m (£140m) settlement paid to affected customers. Other big businesses impacted by similar incidents include Accenture, Facebook, LinkedIn, Pegasus Airlines, Uber and Twilio. So, what can organisations do to secure their AWS tech stacks?

One of the biggest risks of an insecure AWS tech stack is data theft and exfiltration by cyber criminals, according to Rik Turner, chief cyber security analyst at Omdia. He explains this can happen when S3 buckets, which contain large volumes of files and sensitive metadata, aren’t set up properly.

As a result, S3 bucket access rights can be granted to employees who don’t need them for their roles, creating insider threats. Or, worse, these storage resources can be left exposed on the public internet for anyone to access and abuse.
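By way of illustration, the public-exposure risk Turner describes is exactly what S3’s Block Public Access settings exist to prevent. The sketch below shows the configuration payload in the shape boto3 expects; actually applying it (the call shown in the comment) would require boto3 and valid AWS credentials, and the helper function is purely illustrative.

```python
# Minimal sketch of S3 Block Public Access settings. Applying them for
# real requires boto3 and credentials, e.g.:
#   boto3.client("s3").put_public_access_block(
#       Bucket="example-bucket",
#       PublicAccessBlockConfiguration=BLOCK_ALL)

BLOCK_ALL = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # restrict access via public policies
}

def is_fully_locked_down(config: dict) -> bool:
    """True only if every Block Public Access flag is enabled."""
    return all(config.get(key) is True for key in BLOCK_ALL)

print(is_fully_locked_down(BLOCK_ALL))
```

A bucket whose configuration fails this check is a candidate for exactly the kind of accidental public exposure described above.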

Sensitive corporate and customer data exposed in this way can lead to businesses experiencing “enormous financial losses”, says Sylvester Kaczmarek, a professor at online higher education provider the Open Institute of Technology. Their finances take a hit through regulatory fines, customer lawsuits and expensive recovery efforts that can last for months. Reputational damage is often substantial, too.

Additionally, weak or reused user credentials, the absence of cyber security logging and monitoring capabilities, and weaknesses in cyber defences like firewalls leave AWS tech stacks dangerously exposed to data breaches, he adds.

Data breaches can also stem from poorly secured Relational Database Service databases, Elastic Compute Cloud (EC2) instances and application programming interfaces, explains Bob McCarter, chief technology officer of risk and compliance software provider Navex. Erroneous identity and access management policies, a lack of multi-factor authentication, unpatched software and open ports are common security issues affecting these AWS services.

Besides costly data breaches, the day-to-day operations of modern businesses can grind to a halt in the aftermath of an EC2 instance compromise. Such a compromise results in “impaired performance”, and even “a complete malfunctioning”, of critical applications and workloads, explains Turner.

These issues are largely the product of mistakes made by AWS users and not cyber attacks targeted at Amazon, according to Neil MacDonald, vice-president and distinguished analyst at Gartner. But he emphasises that mistakes can easily happen due to the “sheer size, complexity and rate of change of AWS deployments”, adding that they are “impossible” to monitor without using appropriate security tools from AWS or other technology companies.

It is, therefore, the responsibility of AWS users to protect the data they upload to AWS cloud resources. This division of duties is enshrined in the cloud security shared responsibility model: cloud providers like AWS secure the infrastructure they sell, while customers are responsible for securing what they build and store on it.

Best practices to secure AWS tech stacks

When it comes to securing AWS tech stacks, many effective best practices are laid out in the AWS Well-Architected framework. McCarter explains that it offers a comprehensive guide for access management, infrastructure management, data privacy, application security, and cyber threat monitoring and detection.

Crystal Morin, cyber security strategist at cloud security company Sysdig, is another vocal supporter of this framework. She says it’s great for handling the prevention, protection, detection and response sides of cyber security. “This model helps you think through how to prevent problems in the first place, ensure your workloads have security in place, and then have the right tools in place to detect and respond to cloud security threats if and when they do take place,” says Morin.

As well as adhering to AWS’s own security best practices, MacDonald points out that the Center for Internet Security also offers advice for creating and maintaining a secure AWS tech stack. He adds that many modern cyber security tools are aligned with the latest AWS best practices, whether provided by Amazon or an outside organisation.

Given that lots of AWS-related security incidents are caused by inadequate access controls, Jake Moore – global cyber security advisor at antivirus maker ESET – urges organisations to implement the principle of least privilege to ensure access rights are limited to those who require them for their roles. This should be enforced as part of a wider identity and access management strategy.
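To make the principle of least privilege concrete, the sketch below builds an IAM policy document granting read-only access to a single bucket, rather than the broad wildcard permissions that enable insider misuse. The bucket name is hypothetical, and attaching the policy to a role would be done separately via the IAM console or API.

```python
import json

# Hypothetical bucket name, for illustration only.
BUCKET = "example-reports-bucket"

# A least-privilege IAM policy: read-only access to one bucket,
# instead of "s3:*" on "*".
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",        # the bucket itself
                f"arn:aws:s3:::{BUCKET}/*",      # objects inside it
            ],
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

Scoping both `Action` and `Resource` this tightly means a compromised or careless account can read reports but cannot delete them, rewrite them or reach any other bucket.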

Of course, staff hiring, attrition and promotion can make it difficult to manage AWS access controls. Still, Moore says businesses can use cyber security monitoring tools to track these changes and ensure access controls are amended accordingly, minimising security incidents. In addition to investing in these tools, he urges organisations with AWS stacks to regularly audit their cyber security posture to ensure security gaps are identified and closed swiftly. Automated analysis tools can help with this.
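As a toy example of the automated posture checks Moore recommends, the function below flags security group rules that leave SSH (port 22) open to the whole internet. The rule dictionaries mirror the shape returned by boto3’s `describe_security_groups`; in a real audit they would come from that API call, which requires credentials.

```python
# Flag ingress rules exposing port 22 to 0.0.0.0/0 -- a common
# misconfiguration that automated audit tools look for.

def open_ssh_rules(rules: list[dict]) -> list[dict]:
    """Return ingress rules that expose SSH to the whole internet."""
    flagged = []
    for rule in rules:
        covers_ssh = rule.get("FromPort", 0) <= 22 <= rule.get("ToPort", 65535)
        world_open = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if covers_ssh and world_open:
            flagged.append(rule)
    return flagged

rules = [
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]
print(len(open_ssh_rules(rules)))  # only the SSH rule is risky
```

Running checks like this on a schedule, and alerting when the flagged list is non-empty, is the kind of continuous auditing that closes gaps before attackers find them.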

To ensure cyber criminals can’t steal sensitive data stored on and travelling between AWS servers, OPIT’s Kaczmarek says organisations must encrypt data both at rest and in transit. The AWS Key Management Service (KMS) helps protect data at rest. Meanwhile, tight network security configurations, applied to virtual private clouds, security groups and network access control lists, are the key to securing data in transit and wider network traffic, according to Kaczmarek.
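For encryption at rest, a bucket can be given a default KMS-backed encryption rule. The sketch below shows the configuration payload in the shape boto3’s `put_bucket_encryption` expects; the key alias is hypothetical, and applying it (the call in the comment) needs real credentials and an existing KMS key.

```python
# Default server-side encryption for an S3 bucket, using a KMS key.
# "alias/example-data-key" is a hypothetical alias. To apply:
#   boto3.client("s3").put_bucket_encryption(
#       Bucket="example-bucket",
#       ServerSideEncryptionConfiguration=SSE_KMS)

SSE_KMS = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-data-key",
            },
            "BucketKeyEnabled": True,  # reduces per-object KMS requests
        }
    ]
}

print(SSE_KMS["Rules"][0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"])
```

With a rule like this in place, every object written to the bucket is encrypted by default, so encryption no longer depends on each uploader remembering to request it.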

Organisations operating AWS tech stacks can log account activity using AWS CloudTrail and monitor it using Amazon CloudWatch, says Kaczmarek. He adds that these efforts can be complemented by enforcing multi-factor authentication, applying security patches as soon as they’re issued and replacing manual processes with infrastructure as code. The last step is paramount for “consistency and auditing”, he claims.
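The infrastructure-as-code approach Kaczmarek favours can be sketched as a minimal CloudFormation template, here expressed as a Python dictionary: a multi-region CloudTrail trail with log-file validation, writing to a hypothetical audit bucket. Resource names are illustrative, and a real deployment would also need a bucket policy allowing CloudTrail to write to it.

```python
import json

# A minimal CloudFormation sketch (as a Python dict) for the logging
# setup described above: one multi-region CloudTrail trail, defined as
# code so it can be versioned, reviewed and audited.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AuditTrail": {
            "Type": "AWS::CloudTrail::Trail",
            "Properties": {
                "IsLogging": True,
                "IsMultiRegionTrail": True,      # capture all regions
                "EnableLogFileValidation": True, # tamper-evident logs
                "S3BucketName": "example-audit-log-bucket",
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Because the trail is declared in a template rather than clicked together in the console, every change to the logging setup leaves a reviewable diff, which is precisely the “consistency and auditing” benefit claimed above.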

 


Related posts

Agenda Digitale: Regenerative Business – The Future of Business Is Net-Positive
OPIT - Open Institute of Technology
Dec 8, 2025

The net-positive model transcends traditional sustainability by aiming to generate more value than is consumed. Blockchain, AI, and IoT enable scalable circular models. Case studies demonstrate how profitability and positive impact combine to regenerate business and the environment.

By Francesco Derchi, Professor and Area Chair in Digital Business @ OPIT – Open Institute of Technology

In recent years, the word “sustainability” has become a firm fixture in the corporate lexicon. However, simply “doing no harm” is no longer enough: the climate crisis, social inequalities and the erosion of natural resources require a change of pace. This is where the net-positive paradigm comes in, a model that isn’t content to simply reduce negative impacts, but aims to generate more social and environmental value than is consumed.

This isn’t about philanthropy, nor is it about reputational makeovers: net-positive is a strategic approach that intertwines economics, technology, and corporate culture. Within this framework, digitalization becomes an essential lever, capable of enabling regenerative models through circular platforms and exponential technologies.

Blockchain, AI, and IoT: The Technological Triad of Regeneration

Blockchain, Artificial Intelligence, and the Internet of Things represent the technological triad that makes this paradigm shift possible. Each addresses a critical point in regeneration.

Blockchain guarantees the traceability of material flows and product life cycles, allowing a regenerated dress or a bottle collected at sea to tell their story in a transparent and verifiable way.

Artificial Intelligence optimizes recovery and redistribution chains, predicting supply and demand, reducing waste and improving the efficiency of circular processes.

Finally, IoT enables real-time monitoring, from sensors installed at recycling plants to sharing mobility platforms, returning granular data for quick, informed decisions.

These integrated technologies allow us to move beyond the linear model and enable systems in which value is continuously regenerated.

New business models: from product-as-a-service to incentive tokens

Digital regeneration isn’t limited to the technological dimension; it’s redefining business models. More and more companies are adopting product-as-a-service approaches, transforming goods into services: from technical clothing rentals to pay-per-use industrial machinery. This approach reduces resource consumption and encourages modular products designed for reuse.

At the same time, circular marketplaces create ecosystems where materials, components, and products find new life. No longer waste, but input for other production processes. The logic of scarcity is overturned in an economy of regenerated abundance.

To complete the picture, incentive tokens — digital tools that reward virtuous behavior, from collecting plastic from the sea to reusing used clothing — activate global communities and catalyze private capital for regeneration.

Measuring Impact: Integrated Metrics for Net-Positiveness

One of the main obstacles to the widespread adoption of net-positive models is the difficulty of measuring their impact. Traditional profit-focused accounting systems are not enough; they need to be complemented by integrated metrics that tie ESG performance to ROI, such as impact-weighted accounting or innovative indicators like lifetime carbon savings.

In this way, companies can validate the scalability of their models and attract investors who are increasingly attentive to financial returns that go hand in hand with social and environmental returns.

Case studies: RePlanet Energy, RIFO, and Ogyre

Concrete examples demonstrate how the combination of circular platforms and exponential technologies can generate real value. RePlanet Energy has defined its Massive Transformative Purpose as “Enabling Regeneration” and is now providing sustainable energy to Nigerian schools and hospitals, thanks in part to transparent blockchain-based supply chains and the active contribution of employees. RIFO, a Tuscan circular fashion brand, regenerates textile waste into new clothing, supporting local artisans and promoting workplace inclusion, with transparency in the production process as a distinctive feature and driver of loyalty. Ogyre incentivizes fishermen to collect plastic during their fishing trips; the recovered material is digitally tracked and transformed into new products, while the global community participates through tokens and environmental compensation programs.

These cases demonstrate how regeneration and profitability are not contradictory, but can actually feed off each other, strengthening the competitiveness of businesses.

From Net Zero to Net Positive: The Role of Massive Transformative Purpose

The crucial point lies in the distinction between sustainability and regeneration. The former aims for net zero, that is, reducing the impact until it is completely neutralized. The latter goes further, aiming for a net positive, capable of giving back more than it consumes.

This shift in perspective requires a strong Massive Transformative Purpose: an inspiring and shared goal that guides strategic choices, preventing technology from becoming a sterile end. Without this level of intentionality, even the most advanced tools risk turning into gadgets with no impact.

Regenerating business also means regenerating skills to train a new generation of professionals capable not only of using technologies but also of directing them towards regenerative business models. From this perspective, training becomes the first step in a transformation that is simultaneously cultural, economic, and social.

The Regenerative Future: Technology, Skills, and Shared Value

Digital regeneration is not an abstract concept, but a concrete practice already being tested by companies in Europe and around the world. It’s an opportunity for businesses to redefine their role, moving from mere economic operators to drivers of net-positive value for society and the environment.

The combination of blockchain, AI, and IoT with circular product-as-a-service models, marketplaces, and incentive tokens can enable scalable and sustainable regenerative ecosystems. The future of business isn’t just measured in terms of margins, but in the ability to leave the world better than we found it.

Raconteur: AI on your terms – meet the enterprise-ready AI operating model
OPIT - Open Institute of Technology
Nov 18, 2025

Source:

  • Raconteur, published on 6 November 2025

What is the AI technology operating model – and why does it matter? A well-designed AI operating model provides the structure, governance and cultural alignment needed to turn pilot projects into enterprise-wide transformation

By Duncan Jefferies

Many firms have conducted successful artificial intelligence (AI) pilot projects, but scaling them across departments and workflows remains a challenge. Inference costs, data silos, talent gaps and poor alignment with business strategy are just some of the issues that leave organisations trapped in pilot purgatory. This inability to scale successful experiments means AI’s potential for improving enterprise efficiency, decision-making and innovation isn’t fully realised. So what’s the solution?

Although it’s not a magic bullet, an AI operating model is the foundation for scaling pilot projects up to enterprise-wide deployments. Essentially, it’s a structured framework that defines how the organisation develops, deploys and governs AI. By bringing together infrastructure, data, people and governance in a flexible and secure way, it ensures that AI delivers value at scale while remaining ethical and compliant.

“A successful AI proof-of-concept is like building a single race car that can go fast,” says Professor Yu Xiong, chair of business analytics at the UK-based Surrey Business School. “An efficient AI technology operations model, however, is the entire system – the processes, tools, and team structures – for continuously manufacturing, maintaining, and safely operating an entire fleet of cars.”

But while the importance of this framework is clear, how should enterprises establish and embed it?

“It begins with a clear strategy that defines objectives, desired outcomes, and measurable success criteria, such as model performance, bias detection, and regulatory compliance metrics,” says Professor Azadeh Haratiannezhadi, co-founder of generative AI company Taktify and professor of generative AI in cybersecurity at OPIT – the Open Institute of Technology.

Platforms, tools and MLOps pipelines that enable models to be deployed, monitored and scaled in a safe and efficient way are also essential in practical terms.

“Tools and infrastructure must also be selected with transparency, cost, and governance in mind,” says Efrain Ruh, continental chief technology officer for Europe at Digitate. “Crucially, organisations need to continuously monitor the evolving AI landscape and adapt their models to new capabilities and market offerings.”

An open approach

The most effective AI operating models are also founded on openness, interoperability and modularity. Open source platforms and tools provide greater control over data, deployment environments and costs, for example. These characteristics can help enterprises to avoid vendor lock-in, successfully align AI to business culture and values, and embed it safely into cross-department workflows.

“Modularity and platformisation…avoids building isolated ‘silos’ for each project,” explains Professor Xiong. “Instead, it provides a shared, reusable ‘AI platform’ that integrates toolchains for data preparation, model training, deployment, monitoring, and retraining. This drastically improves efficiency and reduces the cost of redundant work.”

A strong data strategy is equally vital for ensuring high-quality performance and reducing bias. Ideally, the AI operating model should be cloud and LLM agnostic too.

“This allows organisations to coordinate and orchestrate AI agents from various sources, whether that’s internal or 3rd party,” says Babak Hodjat, global chief technology officer of AI at Cognizant. “The interoperability also means businesses can adopt an agile iterative process for AI projects that is guided by measuring efficiency, productivity, and quality gains, while guaranteeing trust and safety are built into all elements of design and implementation.”

A robust AI operating model should feature clear objectives for compliance, security and data privacy, as well as accountability structures. Richard Corbridge, chief information officer of Segro, advises organisations to: “Start small with well-scoped pilots that solve real pain points, then bake in repeatable patterns, data contracts, test harnesses, explainability checks and rollback plans, so learning can be scaled without multiplying risk. If you don’t codify how models are approved, deployed, monitored and retired, you won’t get past pilot purgatory.”

Of course, technology alone can’t drive successful AI adoption at scale: the right skills and culture are also essential for embedding AI across the enterprise.

“Multidisciplinary teams that combine technical expertise in AI, security, and governance with deep business knowledge create a foundation for sustainable adoption,” says Professor Haratiannezhadi. “Ongoing training ensures staff acquire advanced AI skills while understanding associated risks and responsibilities.”

Ultimately, an AI operating model is the playbook that enables an enterprise to use AI responsibly and effectively at scale. By drawing together governance, technological infrastructure, cultural change and open collaboration, it supports the shift from isolated experiments to the kind of sustainable AI capability that can drive competitive advantage.

In other words, it’s the foundation for turning ambition into reality, and finally escaping pilot purgatory for good.

 
