According to Statista, the U.S. cloud computing industry generated about $206 billion in revenue in 2022. Expand that globally, and the industry has a value of $483.98 billion. Growth is on the horizon, too, with Grand View Research stating that the various types of cloud computing will achieve a compound annual growth rate (CAGR) of 14.1% between 2023 and 2030.

The simple message is that cloud computing applications are big business.

But that won’t mean much to you if you don’t understand the basics of cloud computing infrastructure and how it all works. This article digs into the cloud computing basics so you can better understand what it means to deliver services via the cloud.

The Cloud Computing Definition

Let’s answer the key question immediately – what is cloud computing?

Microsoft defines cloud computing as the delivery of any form of computing services, such as storage or software, over the internet. Taking software as an example, cloud computing allows you to use a company’s software online rather than having to buy it as a standalone package that you install locally on your computer.

For the super dry definition, cloud computing is a model of computing that provides shared computer processing resources and data to computers and other devices on demand over the internet.

Cloud Computing Meaning

Though the cloud computing basics are pretty easy to grasp – you get services over the internet – what it means in a practical context is less clear.

In the past, businesses and individuals needed to buy and install software locally on their computers or servers. This is the typical ownership model. You hand over your money for a physical product, which you can use as you see fit.

You don’t purchase a physical product when using software via the cloud. You also don’t install that product, whatever it may be, physically on your computer. Instead, you receive the services, be they storage, software, analytics, or networking, over the internet, managed directly by the provider. You (and your team) usually install a client that connects to the vendor’s servers, which contain all the necessary computing and storage power.
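
To make that client-and-provider split concrete, here is a minimal sketch of a client fetching a file that lives entirely on a provider’s servers. The endpoint, token variable, and file ID are hypothetical placeholders for illustration, not any real vendor’s API.

```python
# A minimal sketch of the client-to-provider model described above.
# The endpoint, token, and file ID are hypothetical placeholders.
import os
import requests

API_BASE = "https://api.example-cloud.com/v1"            # hypothetical provider endpoint
TOKEN = os.environ.get("CLOUD_API_TOKEN", "demo-token")   # credentials prove your access rights

def download_file(file_id: str, destination: str) -> None:
    """Fetch a file that lives on the provider's servers, not on your machine."""
    response = requests.get(
        f"{API_BASE}/files/{file_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    with open(destination, "wb") as f:
        f.write(response.content)

download_file("quarterly-report", "quarterly-report.docx")
```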

What Is Cloud Computing With Examples?

Perhaps a better way to understand the concept is with some cloud computing examples. These should give you an idea of what cloud computing looks like in practice:

  • Google Drive – By integrating the Google Docs suite and its collaborative tools, Google Drive lets you create, save, edit, and share files remotely via the internet.
  • Dropbox – The biggest name in cloud storage offers a pay-as-you-use service that enables you to increase your available storage space (or decrease it) depending on your needs.
  • Amazon Web Services (AWS) – Aimed largely at developers and technical teams, AWS offers on-demand access to off-site remote servers and a broad catalog of computing services.
  • Microsoft Azure – Microsoft markets Azure as the only “consistent hybrid cloud.” This means Azure allows a company to digitize and modernize its existing infrastructure and make it available over the cloud.
  • IBM Cloud – This service incorporates over 170 services, ranging from simple databases to the cloud servers needed to run AI programs.
  • Salesforce – As the biggest name in the customer relationship management space, Salesforce is one of the biggest cloud computing companies. At the most basic level, it lets you maintain databases filled with details about your customers.

Common Cloud Computing Applications

Knowing what cloud computing is won’t help you much if you don’t understand its use cases. Here are a few ways you could use the cloud to enhance your work or personal life:

  • Host websites without needing to keep on-site servers.
  • Store files and data remotely, as you would with Dropbox or Salesforce. Most of these providers also offer backup services for disaster recovery (see the sketch after this list).
  • Recover lost data with off-site storage facilities that update in real time.
  • Manage a product’s entire development cycle across one workflow, leading to easier bug tracking and fixing alongside quality assurance testing.
  • Collaborate easily using platforms like Google Drive and Dropbox, which allow workers to combine forces on projects as long as they maintain an internet connection.
  • Stream media, especially high-definition video, with cloud setups that provide the resources that an individual may not have built into a single device.
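
As a minimal example of the remote storage and backup use case mentioned above, the sketch below copies a local file to Amazon S3 using the AWS SDK for Python (boto3). The bucket name is a placeholder, and the call assumes your AWS credentials are already configured.

```python
# A minimal sketch of off-site file backup, assuming boto3 and an existing S3 bucket.
import boto3

s3 = boto3.client("s3")  # reads credentials from your environment or AWS config files

def back_up(local_path: str, bucket: str = "my-backup-bucket") -> None:
    """Copy a local file to off-site cloud storage under the same name."""
    s3.upload_file(Filename=local_path, Bucket=bucket, Key=local_path)

back_up("invoices-2024.csv")
```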

The Basics of Cloud Computing

With the general introduction to cloud computing and its applications out of the way, let’s get down to the technical side. The basics of cloud computing are split into five categories:

  • Infrastructure
  • Services
  • Benefits
  • Types
  • Challenges

Cloud Infrastructure

The interesting thing about cloud infrastructure is that it simulates a physical build. You’re still using the same hardware and applications. Servers are in play, as is networking. But you don’t have the physical hardware at your location because it’s all off-site and stored, maintained, and updated by the cloud provider. You get access to the hardware, and the services it provides, via your internet connection.

So, you have no physical hardware to worry about besides the device you’ll use to access the cloud service.

Off-site servers handle storage, database management, and more. You’ll also have middleware in play, facilitating communication between your device and the cloud provider’s servers. That middleware checks your internet connection and access rights. Think of it like a bridge that connects seemingly disparate pieces of software so they can function seamlessly on a system.
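
As a toy illustration of that bridge, the snippet below plays the role of middleware: it checks a request’s access token before handing the request on to the storage back end. The token store and back-end function are invented purely for the example.

```python
# A toy version of the "middleware as a bridge" idea: validate access rights,
# then pass the request on to the provider's back end. All data here is invented.
VALID_TOKENS = {"token-abc123": "alice@example.com"}  # tokens issued by the provider

def storage_backend(user: str, action: str) -> str:
    """Stand-in for the provider's actual storage service."""
    return f"{action} completed for {user} on the provider's servers"

def middleware(token: str, action: str) -> str:
    """Sits between the client and the back end, checking access rights first."""
    user = VALID_TOKENS.get(token)
    if user is None:
        raise PermissionError("Access denied: unknown or expired token")
    return storage_backend(user, action)

print(middleware("token-abc123", "list_files"))
```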

Services

Cloud services are split into three categories:

Infrastructure as a Service (IaaS)

In a traditional IT setup, you have computers, servers, data centers, and networking hardware all combined to keep the front-end systems (i.e., your computers) running. Buying and maintaining that hardware is a huge cost burden for a business.

IaaS offers access to IT infrastructure, with scalability being a critical component, without forcing an IT department to invest in costly hardware. Instead, you can access it all via an internet connection, allowing you to virtualize traditionally physical setups.
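
In practice, “virtualizing a traditionally physical setup” often means renting a server with a single API call. The sketch below, which assumes boto3 and valid AWS credentials, provisions a small virtual machine on EC2; the machine image ID is a placeholder you would replace with a real one for your region.

```python
# A hedged sketch of IaaS: renting a virtual server via an API call instead of buying hardware.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image ID
    InstanceType="t3.micro",          # a small, inexpensive instance type
    MinCount=1,
    MaxCount=1,
)
print("Provisioned instance:", response["Instances"][0]["InstanceId"])
```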

Platform as a Service (PaaS)

Imagine having access to an entire IT infrastructure without worrying about all the little tasks that come with it, such as maintenance and software patching. After all, those small tasks add up, which is why the typical small business spends an average of 6.9% of its revenue on IT systems each year.

PaaS reduces those costs significantly by giving you access to cloud services that manage maintenance and patching via the internet. On the simplest level, this may involve automating software updates so you don’t have to manually check when software is out of date.
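
As a toy version of the kind of update check a PaaS handles for you automatically, the snippet below compares an installed Python package’s version against the latest release listed on PyPI; the package name is just an example.

```python
# A toy update check: is the installed version of a package behind the latest PyPI release?
from importlib.metadata import version

import requests

def is_outdated(package: str) -> bool:
    installed = version(package)  # version currently installed locally
    latest = requests.get(
        f"https://pypi.org/pypi/{package}/json", timeout=10
    ).json()["info"]["version"]   # latest version published on PyPI
    return installed != latest

print("requests needs an update:", is_outdated("requests"))
```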

Software as a Service (SaaS)

If you have a rudimentary understanding of cloud computing, the SaaS model is the one you are likely to understand the most. A cloud provider builds software and makes it available over the internet, with the user paying for access to that software in the form of a subscription. As long as you keep paying your monthly dues, you get access to the software and any updates or patches the service provider implements.

It’s with SaaS that we see the most obvious evolution of the traditional IT model. In the past, you’d pay a one-time fee to buy a piece of software off the shelf, which you then install and maintain yourself. SaaS gives you constant access to the software, its updates, and any new versions as long as you keep paying your subscription. Compare the standalone versions of Microsoft Office with Microsoft Office 365, especially in their range of options, tools, and overall costs.
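
From the vendor’s side, the SaaS model boils down to gating access on an active subscription. The sketch below is a deliberately simplified, invented example of that check, not any real provider’s billing logic.

```python
# A toy, vendor-side view of SaaS: access lasts only while the subscription is active.
from datetime import date

subscriptions = {"acme-corp": date(2025, 12, 31)}  # customer -> paid-through date (invented data)

def can_use_software(customer: str, today: date) -> bool:
    """Return True if the customer's subscription covers today's date."""
    paid_through = subscriptions.get(customer)
    return paid_through is not None and today <= paid_through

print(can_use_software("acme-corp", date(2025, 6, 1)))    # True: subscription still active
print(can_use_software("former-user", date(2025, 6, 1)))  # False: no active subscription
```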

Benefits of Cloud Computing

The traditional model of buying a thing and owning it worked for years. So, you may wonder why cloud computing services have overtaken traditional models, particularly on the software side of things. The reason is that cloud computing offers several advantages over the old ways of doing things:

  • Cost savings – Cloud models allow companies to spread their spending over the course of a year. It’s the difference between spending $100 on a piece of software versus spending $10 per month to access it. Sure, the one-off fee ends up being less over time, but paying $10 per month doesn’t sting your bank balance as much (a quick comparison appears in the sketch after this list).
  • Scalability – Linking directly to cost savings, you don’t need to buy every element of a software package to access the features you need when using cloud services. You pay for what you use and increase your spending as your business scales and you need deeper access.
  • Mobility – Cloud computing allows you to access documents and services anywhere. Where before, you were tied to your computer desk if you wanted to check or edit a document, you can now access that document on almost any device.
  • Flexibility – Tied closely to mobility, the flexibility that comes from cloud computing is great for users. Employees can head out into the field, access the services they need to serve customers, and send information back to in-house workers or a customer relationship management (CRM) system.
  • Reliability – Owning physical hardware means having to deal with the many problems that can affect that hardware. Malfunctions, viruses, and human error can all compromise a network. Cloud service providers offer reliability based on in-depth expertise and more resources dedicated to their hardware setups.
  • Security – Cloud providers handle maintenance and security updates for you, meaning one less thing for a business to worry about. It also absorbs some of the costs of hardware and IT maintenance personnel.
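
Here is the quick arithmetic behind the cost-savings and scalability points above, using the illustrative $100 one-off price and $10 monthly subscription from the list rather than real-world quotes.

```python
# Compare an up-front licence with a pay-as-you-go subscription over different periods.
ONE_OFF_PRICE = 100   # buy the software outright (illustrative figure from the text)
MONTHLY_PRICE = 10    # subscription cost per month (illustrative figure from the text)

for months in (6, 12, 24):
    subscription_total = MONTHLY_PRICE * months
    cheaper = "subscription" if subscription_total < ONE_OFF_PRICE else "one-off purchase"
    print(f"After {months:>2} months: subscription = ${subscription_total}, "
          f"one-off = ${ONE_OFF_PRICE} -> cheaper so far: {cheaper}")
```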

Types of Cloud Computing

The types of cloud computing are as follows:

  • Public Cloud – The cloud provider manages all hardware and software related to the service it provides to users.
  • Private Cloud – An organization develops its suite of services, all managed via the cloud but only accessible to group members.
  • Hybrid Cloud – Combines a public cloud with on-premises infrastructure, allowing applications to move between each.
  • Community Cloud – While the community cloud has many similarities to a public cloud, it’s restricted to only servicing a limited number of users. For example, a banking service may only get offered to the banking community.

Challenges of Cloud Computing

Many a detractor of cloud computing notes that it isn’t as problem-free as it may seem. The challenges of cloud computing may outweigh its benefits for some:

  • Security issues related to cloud computing include data privacy, with cloud providers obtaining access to any sensitive information you store on their servers.
  • As more services switch over to the cloud, managing the costs related to every subscription you have can feel like trying to navigate a spider’s web of software.
  • Just because you’re using a cloud-based service, that doesn’t mean said service handles compliance for you.
  • If you don’t perfectly follow a vendor’s terms of service, they can restrict your access to their cloud services remotely. You don’t own anything.
  • You can’t do anything if a service provider’s servers go down. You have to wait for them to fix the issue, leaving you stuck without access to the software for which you’re paying.
  • You can’t call a third party to resolve an issue your systems encounter with the cloud service because the provider is the only one responsible for their product.
  • Changing cloud providers and migrating data can be challenging, so even if one provider doesn’t work well, companies may hesitate to look for other options due to sunk costs.

Cloud Computing Is the Present and Future

For all of the challenges inherent in the cloud computing model, it’s clear that it isn’t going anywhere. Techjury tells us that about 57% of companies moved, or were in the process of moving, their workloads to cloud services in 2022.

That number will only increase as cloud computing grows and develops.

So, let’s leave you with a short note on cloud computing. It’s the latest step in the constant evolution of how tech companies offer their services to users. Questions of ownership aside, it’s a model that students, entrepreneurs, and everyday people must understand.

Related posts

Il Sole 24 Ore: Integrating Artificial Intelligence into the Enterprise – Challenges and Opportunities for CEOs and Management
OPIT - Open Institute of Technology
Apr 14, 2025

Expert Pierluigi Casale analyzes the adoption of AI by companies, the ethical and regulatory challenges, and the differentiated approach between large companies and SMEs.

By Gianni Rusconi

Easier said than done: to paraphrase the well-known proverb, and to place it in the ever-growing collection of critical issues and opportunities related to artificial intelligence, the task facing CEOs and management of adequately integrating this technology into the company is indeed difficult. Pierluigi Casale, professor at OPIT (Open Institute of Technology, an academic institution founded two years ago and specialized in the field of Computer Science) and technical consultant to the European Parliament for the implementation and regulation of AI, is among those who contributed to the definition of the AI Act, providing advice on aspects of safety and civil liability. His task, in short, is to ensure that the adoption of artificial intelligence (primarily within the parliamentary committees operating in Brussels) is not only efficient, but also ethical and compliant with regulations. And, obviously, his is not an easy job.

The experience gained over the last 15 years in the field of machine learning, and the roles played in organizations such as Europol and in leading technology companies, are the credentials Casale brings to the table to balance the needs of EU bodies with the pressure exerted by American Big Tech and to preserve an independent approach to the regulation of artificial intelligence. A technology, it is worth remembering, that demands broad and diversified knowledge, ranging from the regulatory and application spectrum to geopolitical issues, from computational limitations (common to European companies and public institutions) to the challenges related to training large language models.

CEOs and AI

When we specifically asked how CEOs and C-suites are “digesting” AI in terms of ethics, safety and responsibility, Casale did not shy away, framing the topic based on his own professional career. “I have noticed two trends in particular: the first concerns companies that started using artificial intelligence before the AI Act and that today have the need, as well as the obligation, to adapt to the new ethical framework to be compliant and avoid sanctions; the second concerns companies, like the Italian ones, that are only now approaching this topic, often in terms of experimental and incomplete projects (the expression used literally is “proof of concept”, ed.) and without these having produced value. In this case, the ethical and regulatory component is integrated into the adoption process.”

In general, according to Casale, there is still a lot to do even from a purely regulatory perspective, because there is no full coherence of vision among the different countries, nor the same speed in implementing the guidance. Spain, in this regard, is setting an example: with a royal decree of 8 November 2023, it established a dedicated “sandbox”, i.e. a regulatory experimentation space for artificial intelligence that creates a controlled test environment for the development and pre-marketing phases of certain AI systems, in order to verify compliance with the requirements and obligations set out in the AI Act and to guide companies towards a path of regulated adoption of the technology.

The Lucky Future: How AI Aims to Change Everything
OPIT - Open Institute of Technology
Apr 10, 2025

There is no question that the spread of artificial intelligence (AI) is having a profound impact on nearly every aspect of our lives.

But is an AI-powered future one to be feared, or does AI offer the promise of a “lucky future”?

That “lucky future” prediction comes from Zorina Alliata, principal AI Strategist at Amazon and AI faculty member at Georgetown University and the Open Institute of Technology (OPIT), in her recent webinar “The Lucky Future: How AI Aims to Change Everything” (February 18, 2025).

However, according to Alliata, such a future depends on how the technology develops and whether strategies can be implemented to mitigate the risks.

How AI Aims to Change Everything

For many people, AI is already changing the way they work. However, more broadly, AI has profoundly impacted how we consume information.

From the curation of a social media feed and the summary answer to a search query from Gemini at the top of your Google results page to the AI-powered chatbot that resolves your customer service issues, AI has quickly and quietly infiltrated nearly every aspect of our lives in the past few years.

While there have been significant concerns recently about the potentially negative impact of AI, Alliata’s “lucky future” prediction takes these fears into account. As she detailed in her webinar, a future with AI will have to take into consideration:

  • Where we are currently with AI and future trajectories
  • The impact AI is having on the job landscape
  • Sustainability concerns and ethical dilemmas
  • The fundamental risks associated with current AI technology

According to Alliata, by addressing these risks, we can craft a future in which AI helps individuals better align their needs with potential opportunities and limitations of the new technology.

Industry Applications of AI

While AI has been in development for decades, Alliata describes a period known as the “AI winter,” during which educators like herself studied AI technology but hadn’t yet arrived at practical applications. Concerns over how to make AI profitable also contributed to this period of uncertainty.

That all changed about 10-15 years ago when machine learning (ML) improved significantly. This development led to a surge in the creation of business applications for AI. Beginning with automation and robotics for repetitive tasks, the technology progressed to data analysis – taking a deep dive into data and finding not only new information but new opportunities as well.

This further developed into generative AI capable of completing creative tasks. Generative AI now produces around one billion words per day, compared to the one trillion produced by humans.

We are now at the stage where AI can complete complex tasks involving multiple steps. In her webinar, Alliata gave the example of a team creating storyboards and user pathways for a new app they wanted to develop. Using photos and rough images, they were able to use AI to generate the code for the app, saving hundreds of hours of manpower.

The next step in AI evolution is Artificial General Intelligence (AGI), a highly autonomous level of AI that can replicate or in some cases exceed human intelligence. While the benefits of such technology may be readily obvious to some, the industry itself is divided, not only on whether this form of AI is close at hand or simply unachievable with current tools and technology, but also on whether it should be developed at all.

This unpredictability, according to Alliata, represents both the excitement and the concerns about AI.

The AI Revolution and the Job Market

According to Alliata, the job market is the next area where the AI revolution can profoundly impact our lives.

To date, the AI revolution has not resulted in the widespread layoffs initially feared. Instead of employees being made redundant, many jobs have evolved to allow people to work alongside AI. In fact, AI has also created new jobs, such as AI prompt writer.

However, the prediction is that as AI becomes more sophisticated, it will need less human support, resulting in greater job churn. Alliata shared statistics from various studies predicting that as many as 27% of all jobs are at high risk of becoming redundant due to AI, and that 40% of working hours could be impacted by large language models (LLMs) like ChatGPT.

Furthermore, AI may impact some roles and industries more than others. For example, one study suggests that in high-income countries, 8.5% of jobs held by women were likely to be impacted by potential automation, compared to just 3.9% of jobs held by men.

Is AI Sustainable?

While Alliata shared the many ways in which AI can potentially save businesses time and money, she also highlighted that it is an expensive technology in terms of sustainability.

Conducting AI training and processing puts a heavy strain on processing hardware, requiring a great deal of energy. According to estimates, ChatGPT alone uses as much electricity per day as 121 U.S. households use in an entire year. Gartner predicts that by 2030, AI could consume 3.5% of the world’s electricity.

To reduce the energy requirements, Alliata highlighted potential paths forward, such as hardware optimization with more energy-efficient chips, greater use of renewable energy sources, and algorithm optimization. For example, reusing models that can be adapted to a variety of uses through prompt engineering and parameter-efficient tuning is far less energy-intensive than training new models from scratch.
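
To illustrate why parameter-efficient tuning saves energy, here is a minimal LoRA-style sketch, assuming PyTorch: the large pretrained weights stay frozen and only a small low-rank correction is trained. This is an illustrative toy, not the specific setup Alliata described.

```python
# A minimal sketch of parameter-efficient fine-tuning (LoRA-style), assuming PyTorch.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and learns only a small low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the big pretrained weights stay frozen
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        # Output = frozen base projection + tiny trainable low-rank correction
        return self.base(x) + (x @ self.lora_a) @ self.lora_b

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"Training {trainable:,} of {total:,} parameters")  # only a small fraction is trained
```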

Risks of Using Generative AI

While Alliata is clearly an advocate for the benefits of AI, she also highlighted the risks associated with using generative AI, particularly LLMs.

  • Uncertainty – While we rely on AI for answers, we aren’t always sure that the answers provided are accurate.
  • Hallucinations – Technology designed to answer questions can make up facts when it does not know the answer.
  • Copyright – LLMs are often trained on copyrighted data without permission from the creators.
  • Bias – LLMs are often trained on biased data, and that bias becomes part of the model’s behavior and output.
  • Vulnerability – Users can bypass the original functionality of an LLM and use it for a different purpose.
  • Ethical Risks – AI applications pose significant ethical risks, including the creation of deepfakes, the erosion of human creativity, and the aforementioned risks of unemployment.

Mitigating these risks relies on pillars of responsibility for using AI, including value alignment of the application, accountability, transparency, and explainability.

The last one, according to Alliata, is vital on a human level. Imagine you work for a bank using AI to assess loan applications. If a loan is denied, the explanation you give to the customer can’t simply be “Because the AI said so.” There needs to be firm and explainable data behind the reasoning.

OPIT’s Masters in Responsible Artificial Intelligence explores these risks and responsibilities inherent in AI, among other topics.

A Lucky Future

Despite the potential risks, Alliata concludes that AI presents even more opportunities and solutions in the future.

Information overload and decision fatigue are major challenges today. Imagine you want to buy a new car. You have a dozen features you desire, alongside hundreds of options, as well as thousands of websites containing the relevant information. AI can help you cut through the noise and narrow the information down to what you need based on your specific requirements.

Alliata also shared how AI is changing healthcare, allowing patients to understand their health data, make informed choices, and find healthcare professionals who meet their needs.

It is this functionality that can lead to the “lucky future.” Personalized guidance based on an analysis of vast amounts of data means that each person is more likely to make the right decision with the right information at the right time.
