By Nicholas Fearn

An AWS tech stack can aid business growth and facilitate efficient operations, but misconfigurations have become all too common and stall this progress

Amazon Web Services (AWS) has become the lifeblood of millions of modern businesses, both big and small. But while this popular cloud platform enables them to manage and scale their operations with impressive speed, simplicity and affordability, it also represents a significant security and privacy risk if mismanaged by users.

An insecure or improperly configured AWS tech stack provides a gateway for cyber criminals to enter corporate systems and sensitive files. The most notorious example occurred in 2019, when an ex-Amazon employee stole the data of 100 million Capital One customers simply by exploiting a misconfigured web application firewall in the financial services giant’s AWS tech stack.

The incident ended with a high-profile lawsuit in which Capital One agreed to pay a $190m (£140m) settlement to affected customers. Other big businesses hit by similar incidents include Accenture, Facebook, LinkedIn, Pegasus Airlines, Uber and Twilio. So, what can organisations do to secure their AWS tech stacks?

One of the biggest risks of an insecure AWS tech stack is data theft and exfiltration by cyber criminals, according to Rik Turner, chief cyber security analyst at Omdia. He explains this can happen when S3 buckets, which contain large volumes of files and sensitive metadata, aren’t set up properly.

As a result, S3 bucket access rights can be granted to employees who don’t require them for their roles, leading to insider threats. Or, worse, these crucial storage objects can end up on the public internet for anyone to access and abuse.
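Misconfigurations like this can be caught programmatically. The sketch below checks the configuration shape that AWS’s S3 Block Public Access feature exposes (for example, the dict returned by boto3’s `get_public_access_block` call). The flag names are AWS’s own; the checking logic is a minimal illustration rather than a full audit.

```python
# Sketch: verifying that an S3 bucket blocks all public access.
# In practice the dict would come from
# boto3.client("s3").get_public_access_block(Bucket=name)
#     ["PublicAccessBlockConfiguration"];
# here we inspect it with plain Python.

REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def bucket_blocks_public_access(config: dict) -> bool:
    """True only if every public-access-block flag is enabled."""
    return all(config.get(flag, False) for flag in REQUIRED_FLAGS)
```

A bucket passing this check cannot be made public through ACLs or bucket policies; any missing or disabled flag should be treated as a finding.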

Sensitive corporate and customer data exposed in this way can lead to businesses experiencing “enormous financial losses”, says Sylvester Kaczmarek, a professor at online higher education provider the Open Institute of Technology (OPIT). Their finances take a hit through regulatory fines, customer lawsuits and expensive recovery efforts that can last for months. Reputational damage is often substantial, too.

Additionally, weak or reused user credentials, the absence of cyber security logging and monitoring capabilities, and weaknesses in cyber defences like firewalls leave AWS tech stacks dangerously exposed to data breaches, he adds.

Data breaches can also stem from poorly secured Relational Database Service databases, Elastic Compute Cloud (EC2) instances and application programming interfaces, explains Bob McCarter, chief technology officer of risk and compliance software provider Navex. Erroneous identity and access management policies, a lack of multi-factor authentication, unpatched software and open ports are common security issues affecting these AWS services.
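Open ports of the kind McCarter describes can be spotted by inspecting security group rules. The sketch below works over the rule shape that boto3’s `ec2.describe_security_groups()` returns under "IpPermissions"; the port list is an illustrative choice, not an exhaustive one.

```python
# Sketch: flagging security group rules that expose sensitive ports
# (SSH, common RDS database engines) to the whole internet.

SENSITIVE_PORTS = {22, 3306, 5432}  # SSH, MySQL, PostgreSQL

def world_open_rules(ip_permissions: list) -> list:
    """Return rules that open a sensitive port to 0.0.0.0/0."""
    flagged = []
    for rule in ip_permissions:
        ports = range(rule.get("FromPort", 0), rule.get("ToPort", 65535) + 1)
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        if open_to_world and any(p in SENSITIVE_PORTS for p in ports):
            flagged.append(rule)
    return flagged
```

Run against each security group in an account, anything this returns is a candidate for tightening to a specific CIDR range or removal.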

Besides costly data breaches, the day-to-day operations of modern businesses can grind to a halt in the aftermath of an EC2 instance compromise. Such a compromise results in “impaired performance”, and even “a complete malfunctioning”, of critical applications and workloads, explains Turner.

These issues are largely the product of mistakes made by AWS users and not cyber attacks targeted at Amazon, according to Neil MacDonald, vice-president and distinguished analyst at Gartner. But he emphasises that mistakes can easily happen due to the “sheer size, complexity and rate of change of AWS deployments”, adding that they are “impossible” to monitor without using appropriate security tools from AWS or other technology companies.

It is, therefore, the responsibility of AWS users to take steps to protect the data they upload to AWS cloud resources. This division of duties is enshrined in the cloud security shared responsibility model: providers such as AWS secure the infrastructure they sell, while customers secure the data and workloads they run on it.

Best practices to secure AWS tech stacks

When it comes to securing AWS tech stacks, many effective best practices are laid out in the AWS Well-Architected framework. McCarter explains that it offers a comprehensive guide for access management, infrastructure management, data privacy, application security, and cyber threat monitoring and detection.

Crystal Morin, cyber security strategist at cloud security company Sysdig, is another vocal supporter of this framework. She says it’s great for handling the prevention, protection, detection and response sides of cyber security. “This model helps you think through how to prevent problems in the first place, ensure your workloads have security in place, and then have the right tools in place to detect and respond to cloud security threats if and when they do take place,” says Morin.

As well as adhering to AWS’s own security best practices, MacDonald points out that the Center for Internet Security also offers advice for creating and maintaining a secure AWS tech stack. He adds that many modern cyber security tools are aligned with the latest AWS best practices, whether provided by Amazon or an outside organisation.

Given that lots of AWS-related security incidents are caused by inadequate access controls, Jake Moore – global cyber security advisor at antivirus maker ESET – urges organisations to implement the principle of least privilege to ensure access rights are limited to those who require them for their roles. This should be enforced as part of a wider identity and access management strategy.
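The principle of least privilege Moore describes translates directly into tightly scoped IAM policy documents. The sketch below builds a read-only policy limited to one prefix of one bucket; the bucket and prefix names in the usage example are hypothetical, and in practice the document would be attached via IAM (for example boto3’s `iam.create_policy`) rather than granting broad "s3:*" access.

```python
# Sketch: a least-privilege IAM policy document scoped to a single
# bucket prefix, instead of account-wide S3 access.

import json

def read_only_s3_policy(bucket: str, prefix: str) -> str:
    """Build a policy allowing read access to one prefix of one bucket."""
    document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            },
            {
                # Listing is restricted to the same prefix via a condition
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": f"{prefix}/*"}},
            },
        ],
    }
    return json.dumps(document)
```

A role holding this policy can read reports under one prefix but cannot write, delete or browse the rest of the bucket, which is exactly the containment least privilege aims for.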

Of course, staff hiring, attrition and promotion can make it difficult to manage AWS access controls. Still, Moore says businesses can use cyber security monitoring tools to track these changes and ensure access controls are amended accordingly, minimising security incidents. In addition to investing in these tools, he urges organisations with AWS stacks to regularly audit their cyber security posture to ensure security gaps are identified and closed swiftly. Automated analysis tools can help with this.

To ensure cyber criminals can’t steal sensitive data stored on and travelling between AWS servers, OPIT’s Kaczmarek says organisations must encrypt data when it’s at rest and in transit. Utilising the AWS Key Management Service (KMS) will help protect data at rest. Meanwhile, tight network security configurations are key to securing data in transit and wider network traffic. These should apply across virtual private clouds, Security Groups and Network Access Control Lists, according to Kaczmarek.
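Default encryption with a KMS key can be enforced per bucket. The sketch below builds the configuration dict in the shape boto3’s `s3.put_bucket_encryption` expects; the key ARN in the usage example is a hypothetical placeholder.

```python
# Sketch: building the default-encryption configuration for
# s3.put_bucket_encryption, pointing at a customer-managed KMS key.

def kms_encryption_config(kms_key_arn: str) -> dict:
    """Server-side encryption rule enforcing aws:kms for new objects."""
    return {
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": kms_key_arn,
                },
                # Reuse the bucket-level data key to cut KMS request costs
                "BucketKeyEnabled": True,
            }
        ]
    }
```

With this applied, any object written without an explicit encryption header is still encrypted under the named key by default.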

Organisations operating AWS tech stacks can log all network and API activity using AWS CloudTrail and monitor it using Amazon CloudWatch, says Kaczmarek. He adds that these efforts can be complemented by using multi-factor authentication, applying security patches as soon as they are issued and replacing manual processes with infrastructure as code. The last of these is paramount for “consistency and auditing”, he claims.
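CloudTrail logs make the multi-factor authentication gap measurable. The sketch below filters console-login records for sign-ins made without MFA; the field names ("eventName", "additionalEventData", "MFAUsed") follow CloudTrail’s documented ConsoleLogin event shape, while the records themselves would come from CloudTrail logs or a CloudWatch subscription.

```python
# Sketch: scanning CloudTrail ConsoleLogin records for sign-ins that
# did not use multi-factor authentication.

def logins_without_mfa(records: list) -> list:
    """Return ConsoleLogin events where MFA was not used."""
    return [
        r
        for r in records
        if r.get("eventName") == "ConsoleLogin"
        and r.get("additionalEventData", {}).get("MFAUsed") == "No"
    ]
```

Wired into a scheduled check or CloudWatch alarm, a non-empty result is an actionable signal that an account is signing in without MFA.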

 


Related posts

ChatGPT Action Figures & Responsible Artificial Intelligence
OPIT - Open Institute of Technology
Jun 23, 2025

You’ve probably seen two of the most recent popular social media trends. The first is creating and posting an action figure version of yourself, complete with personalized accessories, from a yoga mat to your favorite musical instrument. There is also the Studio Ghibli trend, which creates an image of you in the style of a character from one of the animation studio’s popular films.

Both of these are possible thanks to OpenAI’s GPT-4o-powered image generator. But what are you risking when you upload a picture to generate this kind of content? More than you might imagine, according to Tom Vazdar, chair of cybersecurity at the Open Institute of Technology (OPIT), in a recent interview with Wired. Let’s take a closer look at the risks and how they tie into the issue of responsible artificial intelligence.

Uploading Your Image

To get a personalized image of yourself back from ChatGPT, you need to upload an actual photo, or potentially multiple images, and tell ChatGPT what you want. But in addition to using your image to generate content for you, OpenAI could also be using your willingly submitted image to help train its AI model. Vazdar, who is also CEO and AI & Cybersecurity Strategist at Riskoria and a board member for the Croatian AI Association, says that this kind of content is “a gold mine for training generative models,” but you have limited power over how that image is integrated into their training strategy.

Plus, you are uploading much more than just an image of yourself. Vazdar reminds us that we are handing over “an entire bundle of metadata.” This includes the EXIF data attached to the image, such as exactly when and where the photo was taken. And your photo may have more content in it than you imagine, with the background – including people, landmarks, and objects – also able to be tied to that time and place.

In addition to this, OpenAI also collects data about the device that you are using to engage with the platform, and, according to Vazdar, “There’s also behavioral data, such as what you typed, what kind of image you asked for, how you interacted with the interface and the frequency of those actions.”

After all that, OpenAI knows a lot about you, and soon, so could their AI model, because it is studying you.

How OpenAI Uses Your Data

OpenAI claims that they did not orchestrate these social media trends simply to get training data for their AI, and that’s almost certainly true. But they also aren’t denying that access to that freely uploaded data is a bonus. As Vazdar points out, “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

OpenAI isn’t the only company using your data to train its AI. Meta recently updated its privacy policy to allow the company to use your personal information on Meta-related services, such as Facebook, Instagram, and WhatsApp, to train its AI. While it is possible to opt out, Meta isn’t advertising that fact or making it easy, which means that most users share their data by default.

You can also control what happens with your data when using ChatGPT. Again, while not well publicized, you can use ChatGPT’s self-service tools to access, export, and delete your personal information, and opt out of having your content used to improve OpenAI’s model. Nevertheless, even if you choose these options, it is still worth stripping data such as location and time from images before uploading them, and considering the privacy of any images, including people and objects in the background, before sharing.
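As a practical illustration of stripping that metadata, the sketch below removes the Exif (APP1) segment from a JPEG byte stream using only the Python standard library. It is a minimal sketch that handles the common segment layout, not every JPEG variant; in practice an imaging library such as Pillow can do this more robustly.

```python
# Sketch: dropping the Exif (APP1) segment, which carries timestamps
# and GPS coordinates, from a JPEG before uploading it anywhere.

def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with Exif APP1 segments removed."""
    if jpeg[:2] != b"\xff\xd8":  # SOI marker
        raise ValueError("not a JPEG")
    out = bytearray(jpeg[:2])
    i = 2
    while i + 4 <= len(jpeg):
        if jpeg[i] != 0xFF:
            break  # entropy-coded data; copy the rest verbatim below
        marker = jpeg[i + 1]
        if marker == 0xD9:  # EOI
            break
        length = int.from_bytes(jpeg[i + 2 : i + 4], "big")
        segment = jpeg[i : i + 2 + length]
        # APP1 (0xE1) with an "Exif\0\0" header is the metadata segment
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)
```

Non-Exif segments such as the JFIF header pass through untouched, so the image still renders, it just no longer says where and when it was taken.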

Are Data Protection Laws Keeping Up?

OpenAI and Meta need to provide these kinds of opt-outs due to data protection laws, such as GDPR in the EU and the UK. GDPR gives you the right to access or delete your data, and the use of biometric data requires your explicit consent. However, your photo only becomes biometric data when it is processed using a specific technical measure that allows for the unique identification of an individual.

But just because ChatGPT does not process images in this way doesn’t mean it can’t learn a lot about you from them.

AI and Ethics Concerns

But you might wonder, “Isn’t it a good thing that AI is being trained using a diverse range of photos?” After all, there have been widespread reports in the past of AI struggling to recognize black faces because they have been trained mostly on white faces. Similarly, there have been reports of bias within AI due to the information it receives. Doesn’t sharing from a wide range of users help combat that? Yes, but there is so much more that could be done with that data without your knowledge or consent.

One of the biggest risks is that the data can be manipulated for marketing purposes, not just to get you to buy products, but also potentially to manipulate behavior. Take, for instance, the Cambridge Analytica scandal, in which data-driven profiling was used to manipulate voters, or the proliferation of deepfakes spreading false news.

Vazdar believes that AI should be used to promote human freedom and autonomy, not threaten it. It should be something that benefits humanity in the broadest possible sense, and not just those with the power to develop and profit from AI.

Responsible Artificial Intelligence

OPIT’s Master’s in Responsible AI combines technical expertise with a focus on the ethical implications of AI, diving into questions such as this one. Focusing on real-world applications, the course considers sustainable AI, environmental impact, ethical considerations, and social responsibility.

Completed over three or four 13-week terms, it starts with a foundation in technical artificial intelligence and then moves on to advanced AI applications. Students finish with a Capstone project, which sees them apply what they have learned to real-world problems.

Riccardo Ocleppo Tells TEDx Why He Created OPIT
OPIT - Open Institute of Technology
Jun 23, 2025

In May 2025, Riccardo Ocleppo, founder of the Open Institute of Technology (OPIT), gave the audience at TEDx Parma in Italy an insight into why he created OPIT, a new type of university that is quickly becoming essential in preparing students for an increasingly technological future.

Meet Riccardo

Although Riccardo graduated from Politecnico di Torino with a bachelor’s in electronic engineering in 2006 – followed by a master’s degree in 2008 – he felt unprepared for the challenges he felt he had to face as a professional. He sought to expand his vision by completing the master’s program at the London School of Business.

While studying in London, Riccardo became focused on how he could help other students optimize their studies and ensure they were properly prepared for their futures. This resulted in the creation of Docsity, an international online community where university students could exchange study materials to prepare for exams.

Docsity has grown into a global community with 15 million registered students. Moreover, it partners with over 250 universities and business schools worldwide, interviewing students and passing their feedback to these institutions to help them refine their offerings. This experience of working as a conduit between students and universities shaped Riccardo’s understanding of the higher education sector’s needs, eventually leading to the creation of OPIT.

The Challenges Facing Higher Education

In his TEDx talk, Riccardo asked the Parma audience to imagine themselves on their first day of university – sitting in their classroom as their professor explains the concepts that they will learn over the coming years, designed to prepare them for the future.

But, he asked, how long will the skills in your curriculum be relevant? In the past, the skills learned at university would last someone for the rest of their professional lives. But today, with technology changing faster than ever, we have reached the point where we can’t accurately predict what technologies we will be using in five years. It is even more challenging, he said, to predict what kind of knowledge children sitting in classrooms today will need when they reach adulthood.

The inability to predict the skills that students will need in the future, or to adapt courses quickly enough to include those skills, is why many university degrees are no longer fit for purpose, Riccardo explained. Instead, he stated, they are sending students down a road toward a destination that will no longer exist by the time they graduate, while the target keeps moving.

Building OPIT

With these challenges and his experiences from Docsity in mind, Riccardo set to work designing the kind of education he wished that he had received. He set out to create a university that would allow learners, at any stage in their career, to adapt and reinvent themselves for the changing world. The result was OPIT, which matriculated its first students in 2023.

To meet these goals, OPIT courses are built around three pillars.

Pillar One: Bridging Theory and Practice

Universities often produce students with excellent theoretical knowledge of a subject area but with limited ability to apply that knowledge to real-world problems. That is how Riccardo felt about his own knowledge and skills when he completed his electronic engineering degree.

OPIT degrees, on the other hand, are designed to provide students with not only a strong technical foundation but also an understanding of and the ability to develop real-world applications.

The OPIT faculty, recruited from some of the world’s leading businesses, play a central role in achieving this. Instead of relying on polished case studies published years after the fact, they use real-life workplace challenges as teaching tools.

Faculty members include practitioners and thought leaders from some of the world’s biggest tech companies, including Zorina Alliata, Principal AI and Generative AI Strategist at Amazon; Khaled Elbehiery, Senior Director and Network Engineer at Charter Communications; Andrea Gozzi, Head of Strategy and Partnership for the Digital Industries Ecosystem at Siemens; and Sabya Dasgupta, Lead Solution Architect at Microsoft.

For MSc programs, students complete this focus on application with the final Capstone Project, which encourages them to apply their knowledge to the real world through an industry internship.

Pillar Two: International and Multidisciplinary

As well as recruiting professors with an international and multidisciplinary profile, OPIT seeks to do the same with the cohort – people working in diverse fields and looking for ways to leverage the same technology to improve what they do. The diversity of the student profile helps break down both educational and industrial silos, encouraging multidisciplinary thinking and unexpected innovation. It can also give students a greater level of cultural awareness, exposing them to perspectives they may not have encountered before.

Courses involve online meetups between peers, allowing them to share challenges and learn through application. OPIT also hosts online events that allow students to connect with leaders from companies such as Morgan Stanley, PayPal, and Microsoft, to learn about the professional world today and forge networks for the future.

Pillar Three: Education That Fits Your Life

The third pillar of OPIT is that education should be flexible and fit into your life, rather than require you to put the rest of your life on hold to study. This is especially important for established professionals who want to adapt or reinvent themselves but don’t have the luxury of walking away from their work and other responsibilities for a few years to do so.

This is why OPIT courses are online by design – or “remote first,” as many companies brand it. This not only allows students to build study into their existing lives but also to gain experience of working remotely as part of a distributed team, both of which are essential skills in today’s work environment.

OPIT Courses

Today, criteria such as “data literacy” and “comfortable working with AI” are often at the top of job descriptions. With these and other necessary skills in mind, OPIT launched with a BSc in Modern Computer Science and an MSc in Applied Data Science and AI.

Since then, they have also initiated a BSc in Digital Business and MSc degrees in Digital Business and Innovation, Responsible Artificial Intelligence, and Enterprise Cybersecurity. The first cohort of students celebrated their graduation ceremony on March 8, 2025.
