The Magazine
John Loewen

Professor @OPIT, PhD in Computer Science, Former Professor @Chulalongkorn University and @Vancouver Island University. Location: Italy. Teaches Computer Architectures (BSc).

A Comprehensive Guide to Python for Data Science
John Loewen
June 30, 2023

As one of the world’s fastest-growing industries, with a compound annual growth rate of 16.43% predicted between 2022 and 2030, data science is the ideal choice for your career. Jobs will be plentiful. Opportunities for career advancement will come thick and fast. And even at the most junior level, you’ll enjoy a salary that comfortably sits in the mid-five figures.


Studying for a career in this field involves learning the basics (and then the complexities) of programming languages including C++, Java, and Python. The latter is particularly important, both due to its popularity among programmers and the versatility that Python brings to the table. Here, we explore the importance of Python for data science and how you’re likely to use it in the real world.


Why Python for Data Science?


We can distill the reasons for learning Python for data science into the following five benefits.


Popularity and Community Support


Statista’s survey of the most widely used programming languages in 2022 tells us that 48.07% of programmers use Python to some degree. Leftronic digs deeper into those numbers, telling us that there are 8.2 million Python developers in the world. As a prospective developer yourself, these numbers tell you two things – Python is in demand, and there’s a huge community of fellow developers who can support you as you build your skills.


Easy to Learn and Use


You can think of Python as a primer for almost any other programming language, as it takes the fundamental concepts of programming and turns them into something practical. Getting to grips with concepts like functions and variables is simpler in Python than in many other languages. From those simple starting points, Python scales up to the level of complexity needed for many areas of data science.


Extensive Libraries and Tools


Given that Python was first introduced in 1991, it has over 30 years of support behind it. That, combined with its continued popularity, means that novice programmers can access a huge number of tools and libraries for their work. Libraries are especially important, as they act like repositories of functions and modules that save time by allowing you to benefit from other people’s work.


Integration With Other Programming Languages


Python’s reference implementation, CPython, is written in C, meaning support for C integration is built into the language. While that enables easy integration between these particular languages, solutions exist to link Python with the likes of C++ and Java, with Python often being capable of serving as the “glue” that binds different languages together.


Versatility and Flexibility


If you can think it, you can usually do it in Python. Its clever modular structure, which allows you to define functions, modules, and entire scripts in different files to call as needed, makes Python one of the most flexible programming languages around.



Setting Up Python for Data Science


Installing Python onto your system of choice is simple enough. You can download the language from the Python.org website, with options available for everything from major operating systems (Windows, macOS, and Linux) to more obscure devices.


However, most developers also install an integrated development environment (IDE) to make coding in Python more comfortable. The following are three IDEs that are popular with those who use Python for data science:


  • Jupyter Notebook – As a web-based application, Jupyter easily allows you to code, configure your workflows, and even access various libraries that can enhance your Python code. Think of it like a one-stop shop for your Python needs, with extensions being available to extend its functionality. It’s also free, which is never a bad thing.
  • PyCharm – Where Jupyter is an open-source IDE for several languages, PyCharm is for Python only. Beyond serving as a coding tool, it offers automated code checking and completion, allowing you to quickly catch errors and write common code.
  • Visual Studio Code – Though Visual Studio Code doesn’t support Python out of the box, its Python extension allows you to edit Python code on any operating system. Its linting feature is great for catching errors in your code, and it comes with an integrated debugger that lets you step through your program and inspect it as it runs.

Setting up your Python development environment is as simple as downloading and installing Python itself, and then choosing an IDE in which to work. Think of Python as the materials you use to build a house, with your IDE being both the blueprint and the tools you’ll need to patch those materials together.


Essential Python Libraries for Data Science


Just as you’ll go to a real-world library to check out books, you can use Python libraries to “check out” code that you can use in your own programs. It’s actually better than that because you don’t need to return libraries when you’re done with them. You get to keep them, along with all of their built-in modules and functions, to call upon whenever you need them. In Python for data science, the following are some essential libraries:


  • NumPy – We spoke about integration earlier, and NumPy is ideal for that. It brings the numerical computing power associated with languages like Fortran and C into Python. By expanding Python with powerful array and numerical computing tools, it helps transform it into a data science powerhouse.
  • pandas – Manipulating and analyzing data lies at the heart of data science, and pandas gives you a library full of tools to allow both. It offers modules for cleaning data, plotting, finding correlations, and simply reading CSV and JSON files.
  • Matplotlib – Some people can look at reams of data and see patterns form within the numbers. Others need visualization tools, which is where Matplotlib excels. It helps you create interactive visual representations of your data for use in presentations or if you simply prefer to “see” your data rather than read it (a short sketch combining these first three libraries follows this list).
  • Scikit-learn – The emerging (some would say “exploding”) field of machine learning is critical to the AI-driven future we’re seemingly heading toward. Scikit-learn is a library that offers tools for predictive data analysis, built on what’s available in the NumPy and Matplotlib libraries.
  • TensorFlow and Keras – Much like Scikit-learn, both TensorFlow and Keras offer rich libraries of tools related to machine learning. They’re essential if your data science projects take you into the realms of neural networks and deep learning.
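
To see how the first three of those libraries fit together, here is a minimal sketch (assuming NumPy, pandas, and Matplotlib are installed, and using made-up study-time data purely for illustration):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Build a small synthetic dataset with NumPy
hours = np.arange(1, 11)                               # hours studied
scores = 50 + 4 * hours + np.random.normal(0, 3, 10)   # noisy exam scores

# Wrap the arrays in a pandas DataFrame for easy inspection
df = pd.DataFrame({"hours": hours, "score": scores})
print(df.describe())          # quick summary statistics

# Visualize the relationship with Matplotlib
plt.scatter(df["hours"], df["score"])
plt.xlabel("Hours studied")
plt.ylabel("Exam score")
plt.show()
```

Even a toy example like this shows the typical division of labor: NumPy generates and crunches the numbers, pandas organizes them, and Matplotlib draws them.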

Data Science Workflow in Python


A Python programmer without a workflow is like a ship’s captain without a compass. You can sail blindly onward, and you may even get lucky and reach your destination, but the odds are you’re going to get lost in the vastness of the programming sea. For those who want to use Python for data science, the following workflow brings structure and direction to your efforts.


Step 1 – Data Collection and Preprocessing


You need to collect, organize, and import your data into Python (as well as clean it) before you can draw any conclusions from it. That’s why the first step in any data science workflow is to prepare the data for use (hint – the pandas library is perfect for this task).
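
As a rough sketch of what this step can look like with pandas (the file name students.csv and its columns are invented for illustration), the cleaning portion often amounts to a few lines:

```python
import pandas as pd

# Load raw data from a CSV file (hypothetical file and column names)
df = pd.read_csv("students.csv")

# Basic cleaning: drop exact duplicates and rows missing the target column
df = df.drop_duplicates()
df = df.dropna(subset=["final_grade"])

# Fill any remaining missing numeric values with the column median
df = df.fillna(df.median(numeric_only=True))

df.info()   # confirm column types and non-null counts before moving on
```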


Step 2 – Exploratory Data Analysis (EDA)


Just because you have clean data, that doesn’t mean you’re ready to investigate what that data tells you. It’s like washing ingredients before you make a dish – you need to have a “recipe” that tells you how to put everything together. Data scientists use EDA as this recipe, allowing them to combine data visualization (remember – the Matplotlib library) with descriptive statistics that show them what they’re looking at.
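
Continuing the hypothetical students DataFrame from the previous step, a minimal EDA pass might pair descriptive statistics with a quick plot:

```python
import matplotlib.pyplot as plt

# Descriptive statistics for every numeric column
print(df.describe())

# Correlations hint at which columns relate to the target
print(df.corr(numeric_only=True)["final_grade"].sort_values())

# A histogram makes the distribution of the target easy to "see"
df["final_grade"].hist(bins=20)
plt.xlabel("Final grade")
plt.ylabel("Number of students")
plt.show()
```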


Step 3 – Feature Engineering


This is where you dig into the “whats” and “hows” of your model. You’ll select and engineer features – the input variables derived from your raw data – which define what the model learns from and how it delivers outcomes. Scaling features to comparable ranges is a key part of this process, while feature creep (i.e., constantly adding features as you get deeper into a project) is the key thing to avoid.
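
A minimal sketch of the scaling step with scikit-learn’s StandardScaler (the column names are assumptions carried over from the hypothetical students data):

```python
from sklearn.preprocessing import StandardScaler

# Hypothetical feature columns chosen during feature engineering
features = df[["hours_studied", "attendance_rate", "assignments_done"]]

# Standardize each feature to zero mean and unit variance
scaler = StandardScaler()
scaled_features = scaler.fit_transform(features)
```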


Step 4 – Model Selection and Training


Decision trees, linear regression, logistic regression, neural networks, and support vector machines. These are all models (with their own algorithms) you can use for your data science project. This step is all about selecting the right model for the job (your intended features are important here) and training that model so it produces accurate outputs.
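
As one hedged example, training a logistic regression model with scikit-learn on a hypothetical pass/fail label (continuing the earlier sketches) might look like this:

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Hypothetical binary target: did the student pass?
passed = (df["final_grade"] >= 50).astype(int)

# Hold out a test set so evaluation uses data the model has never seen
X_train, X_test, y_train, y_test = train_test_split(
    scaled_features, passed, test_size=0.2, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)
```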


Step 5 – Model Evaluation and Optimization


Like a puppy that hasn’t been house trained, an unevaluated model isn’t ready for release into the real world. Classification metrics, such as a confusion matrix and classification report, help you to evaluate your model’s predictions against real-world results. You also need to tune the hyperparameters built into your model, similar to how a mechanic may tune the nuts and bolts in a car, to get everything working as efficiently as possible.
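
Continuing the sketch above, scikit-learn’s classification metrics make this evaluation step concrete:

```python
from sklearn.metrics import confusion_matrix, classification_report

predictions = model.predict(X_test)

# Compare the model's predictions against the held-out labels
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
```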


Step 6 – Deployment and Maintenance


You’ve officially deployed your Python for data science model when you release it into the wild and let it start predicting outcomes. But the work doesn’t end at deployment, as constant monitoring of what your model does, outputs, and predicts is needed to tell you if you need to make tweaks or if the model is going off the rails.


Real-World Data Science Projects in Python


There are many examples of Python for data science in the real world, some of which are simple while others delve into some pretty complex datasets. For instance, you can use a simple Python program to scrape live stock prices from a source like Yahoo! Finance, allowing you to create a virtual ticker of stock price changes for investors.
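
One common way to pull that kind of data is the third-party yfinance package; the sketch below assumes it’s installed (pip install yfinance) and that Yahoo! Finance still exposes the data it wraps:

```python
import yfinance as yf  # third-party package, not part of the standard library

# Download recent daily prices for a ticker (AAPL is used purely as an example)
prices = yf.download("AAPL", period="5d", interval="1d")
print(prices.tail())
```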


Alternatively, why not create a chatbot that uses natural language processing to classify and respond to text? For that project, you’ll tokenize sentences, essentially breaking them down into constituent words called “tokens,” and tag those tokens with meanings that you could use to prompt your program toward specific responses.
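
A hedged sketch of that tokenizing-and-tagging step using NLTK (the example sentence is arbitrary, and the exact resource names you need to download can vary between NLTK versions):

```python
import nltk

# One-time downloads of the tokenizer and part-of-speech tagger models
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

sentence = "What time does the library open tomorrow?"

tokens = nltk.word_tokenize(sentence)   # break the sentence into tokens
tagged = nltk.pos_tag(tokens)           # tag each token with a part of speech
print(tagged)
```

From there, a simple chatbot could map particular tokens or tags to canned responses, while a more ambitious one could feed them into a machine learning model.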


There are plenty of ideas to play around with, and Python is versatile enough to enable most, so consider what you’d like to do with your program and then go on the hunt for datasets. Great (and free) resources include The Boston House Price Dataset, ImageNet, and IMDB’s movie review database.



Try Python for Data Science Projects


By combining versatility and easy integration with an ease of use that makes it welcoming to beginners, Python has become one of the world’s most popular programming languages. In this introduction to data science in Python, you’ve discovered some of the libraries that can help you to apply Python for data science. Plus, you have a workflow that lends structure to your efforts, as well as some ideas for projects to try. Experiment, play, and tweak models. Every minute you spend applying Python to data science is a minute spent learning a popular programming language in the context of a rapidly growing industry.

DBMS Architecture: A Comprehensive Guide to Database System Concepts
John Loewen
June 30, 2023

Today’s tech-driven world is governed by data – so much so that nearly 98% of all organizations are increasing investment in data.


However, company owners can’t put their feet up after improving their data capabilities. They also need a database management system (DBMS) – a program specifically designed for storing and organizing information efficiently.


When analyzing a DBMS, you need to be thorough like a detective investigating a crime. One of the elements you want to consider is DBMS architecture. It describes the structure of your database and how individual bits of information are related to each other. The importance of DBMS architecture is enormous, as it helps IT experts design and maintain fully functional databases.


But what exactly does a DBMS architecture involve? You’ll find out in this entry. Coming up is an in-depth discussion of database system concepts and architecture.


Overview of DBMS Architecture


Suppose you’re assembling your PC. You can opt for several configurations, such as those with three RAM slots and dual-fan coolers. The same principle applies to DBMS architectures.


Two of the most common architectures are three-level and two-level architectures.


Three-Level Architecture


Three-level architecture is like teacher-parent communication. More often than not, a teacher communicates with parents through children, asking them to convey certain information. In other words, there are layers between the two that don’t allow direct communication.


The same holds for three-level architecture. But instead of just one layer, there are two layers between the database and user: application client and application server.


And as the name suggests, a three-level DBMS architecture has three levels:


  • External level – Also known as the view level, this section concerns the part of your database that’s relevant to the user. Everything else is hidden.
  • Conceptual level – Put yourself in the position of a scuba diver exploring the ocean layer by layer. Once you reach the external level, you go one segment lower and find the conceptual level. It describes information conceptually and tells you how data segments interact with one another.
  • Internal level – Another name for the internal level is the physical level. But what does it deal with? It mainly focuses on how data is stored in your system (e.g., using folders and files).

Two-Level Architecture


When you insert a USB drive into your PC, you can see its contents through your computer’s interface. However, the data itself still lives on the USB drive, meaning the interface and the underlying storage are separate.


Two-level architecture takes the same approach to separating data interface and data structure. Here are the two levels in this DBMS architecture:


  • User level – Any application and interface in your database are stored on the user level in a two-level DBMS architecture.
  • System level – The system level (aka server level) performs transaction management and other essential processes.

Comparison of the Two Architectures


Determining which architecture works best for your database is like buying a car. You need to consider how easy it is to use and the level of performance you can expect.


On the one hand, the biggest advantage of two-level architectures is that they’re relatively easy to set up. There’s just one layer between the database and the user, resulting in easier database management.


On the other hand, developing a three-level DBMS architecture may take a while since you need to include two layers between the database and the user. That said, three-level architectures are normally superior to two-level architectures due to higher flexibility and the ability to incorporate information from various sources.



Components of DBMS Architecture


You’ve scratched the surface of database system concepts and architecture, but don’t stop there. It’s time to move on from the basics to the most important elements of a DBMS architecture:


Data Storage


The fact that DBMS architectures have data storage solutions is carved in stone. What exactly are those solutions? The most common ones are as follows:


  • Data files – How many files do you have on your PC? If it’s a lot, you’re doing exactly what administrators of DBMS architectures are doing. A large number of them store data in files, and each file is categorized into blocks.
  • Indexes – You want your database operations to be like lightning bolts, i.e. super-fast. You can incorporate indexes to accomplish this goal. They point to data columns for quick retrieval.
  • Data dictionary – Also known as system logs, data dictionaries contain metadata – information about your data.

Data Manipulation


A large number of companies still utilize manual data management methods. But using this format is like shooting yourself in the foot when advanced data manipulation methods are available. These allow you to process and retrieve data within seconds through different techniques:


  • Query processor – Query processing refers to extracting data from your DBMS architecture. It operates like any other multi-stage process. It involves parsing, translation, optimization, and evaluation.
  • Query optimizer – A DBMS architecture administrator can perform various query optimization tasks to achieve desired results faster.
  • Execution engine – Whenever you want your architecture to do something, you send requests. But something needs to process the requests – that something is the execution engine.

Data Control


We’re continuing our journey through an average DBMS architecture. Our next stop is data control, which comprises these key elements:


  • Transaction management – When carrying out multiple transactions, how does the system prioritize one over another? The answer lies in transaction management, which is also about processing multiple transactions side by side.
  • Concurrency control – Database architecture is like an ocean teeming with life. Countless operations take place simultaneously. As a result, the system needs concurrency control to manage these concurrent tasks.
  • Recovery management – What if your DBMS architecture fails? Do you give up on your project? No – the system has robust recovery management tools to retrieve your information and reduce downtime.

Database System Concepts


To give you a better understanding of a DBMS architecture, let’s describe the most important concepts regarding this topic.


Data Models


Data models do to information what your folders do to files – organize them. There are four major types of data models:


  • Hierarchical model – Top-down and bottom-up storage solutions are known as hierarchical models. They’re characterized by tree-like structures.
  • Network model – Hierarchical models are generally used for basic data relationships. If you want to analyze complex relationships, you need to kick things up a notch with network models. They enable you to represent huge quantities of complex information without a hitch.
  • Relational model – Relations are merely tables with values. A relational model is a collection of these relations, indicating how data is connected to other data.
  • Object-oriented model – Programming languages regularly use objects. An object-oriented model stores information as objects and is usually more complex than other models.

Database Schema and Instances


Another concept you should familiarize yourself with is schemas and instances.


  • Definition of schema and instance – Schemas are like summaries, providing a basic description of databases. Instances tell you what information is stored in a database.
  • Importance of schema in DBMS architecture – Schemas are essential because they help organize data by providing a clear outline.

Data Independence


The ability of other pieces of information to remain unaffected after you change one bit of data is known as data independence. What are the different types of data independence, and what makes them so important?


  • Logical data independence – If you can modify logical schemas without altering the rest of the system, your logical data is independent.
  • Physical data independence – Physical data is independent if it remains unaffected when changing your hardware, such as SSD disks.
  • Significance of data independence in DBMS architecture – Independent data is crucial for saving time in database management because it reduces the amount of information that needs to be processed.

Efficient Database Management Systems


Database management systems have a lot in common with other tech-based systems. For example, you won’t ignore problems that arise on your PC, be they CPU or graphics card issues. You’ll take action to optimize the performance of the device and solve those issues.


That’s exactly what 75% of developers and administrators of database management systems do. They go the extra mile to enhance the performance, scalability, flexibility, security, and integrity of their architecture.


Performance Optimization Techniques


  • Indexing – By pointing to certain data in tables, indexes speed up database management (see the sketch after this list).
  • Query optimization – This process is about finding the most efficient method of executing queries.
  • Caching – Frequently accessed information is cached to accelerate retrieval.
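
As a minimal sketch of the indexing technique, here is what creating and using an index looks like in SQL, driven from Python’s built-in sqlite3 module (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
cur = conn.cursor()

cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, city TEXT, name TEXT)")

# An index on the city column lets lookups by city avoid a full table scan
cur.execute("CREATE INDEX idx_customers_city ON customers (city)")

cur.execute("SELECT name FROM customers WHERE city = ?", ("Milan",))
print(cur.fetchall())
conn.close()
```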

Scalability and Flexibility


  • Horizontal scaling – Horizontal scaling involves increasing the number of servers.
  • Vertical scaling – An administrator can boost the performance of the server to make the system more scalable.
  • Distributed databases – Databases are like smartphones in that they can easily overload. Pressure can be alleviated with distributed databases, which store information in multiple locations.

Security and Integrity


  • Access control – Restricting access is key to preventing cyber security attacks.
  • Data encryption – Administrators often encrypt their DBMS architecture to protect sensitive information.
  • Backup and recovery – A robust backup plan helps IT experts recover from shutdowns and other unforeseen problems.

Preparing for the Future Is Critical


DBMS architecture is the underlying structure of a database management system. It consists of several elements, all of which work together to create a fully functional data infrastructure.


Understanding the basic elements of DBMS architecture is vital for IT professionals who want to be well-prepared for future changes, such as hybrid environments. As the old saying goes – success depends upon preparation.

Computer Architecture Basics and Definitions: A Comprehensive Guide
John Loewen
June 28, 2023

Computer architecture forms the backbone of computer science. So, it comes as no surprise it’s one of the most researched fields of computing.


But what is computer architecture, and why does it matter?


Basically, computer architecture dictates every aspect of a computer’s functioning, from how it stores data to what it displays on the interface. Not to mention how the hardware and software components connect and interact.


With this in mind, it isn’t difficult to realize the importance of this structure. In fact, computer scientists were designing computer architectures long before they had a name for them. The first documented computer architecture can be traced back to 1936, 23 years before the term “architecture” was first used when describing a computer. Lyle R. Johnson, an IBM senior staff member, had this honor, realizing that the word “organization” just didn’t cut it.


Now that you know why you should care about it, let’s define computer architecture in more detail and outline everything you need to know about it.


Basic Components of Computer Architecture


Computer architecture is an elaborate system where each component has its place and function. You’re probably familiar with some of the basic computer architecture components, such as the CPU and memory. But do you know how those components work together? If not, we’ve got you covered.


Central Processing Unit (CPU)


The central processing unit (CPU) is at the core of any computer architecture. This hardware component only needs instructions written as binary bits to control all its surrounding components.


Think of the CPU as the conductor in an orchestra. Without the conductor, the musicians are still there, but they’re waiting for instructions.


Without a functioning CPU, the other components are still there, but there’s no computing.


That’s why the CPU’s components are so important.


Arithmetic Logic Unit (ALU)


Since the binary bits used as instructions by the CPU are numbers, the unit needs an arithmetic component to manipulate them.


That’s where the arithmetic logic unit, or ALU, comes into play.


The ALU is the one that receives the binary bits. Then, it performs an operation on one or more of them. The most common operations include addition, subtraction, AND, OR, and NOT.


Control Unit (CU)


As the name suggests, the control unit (CU) controls all the components of basic computer architecture. It transfers data to and from the ALU, thus dictating how each component behaves.


Registers


Registers are the storage units used by the CPU to hold the current data the ALU is manipulating. Each CPU has a limited number of these registers. For this reason, they can only store a limited amount of data temporarily.


Memory


Storing data is the main purpose of the memory of a computer system. The data in question can be instructions issued by the CPU or larger amounts of permanent data. Either way, a computer’s memory is never empty.


Traditionally, this component can be broken into primary and secondary storage.


Primary Memory


Primary memory occupies a central position in a computer system. It’s the only memory unit that can communicate with the CPU directly. It stores only programs and data currently in use.


There are two types of primary memory:


  • RAM (Random Access Memory). In computer architecture, this is equivalent to short-term memory. RAM helps start the computer and only stores data as long as the machine is on and data is being used.
  • ROM (Read Only Memory). ROM stores the data used to operate the system. Due to the importance of this data, the ROM stores information even when you turn off the computer.

Secondary Memory


With secondary memory, or auxiliary memory, there’s room for larger amounts of data (which is also permanent). However, this also means that this memory is significantly slower than its primary counterpart.


When it comes to secondary memory, there’s no shortage of choices. There are hard disk drives (HDDs) and solid-state drives (SSDs) that provide fast access to stored data. And let’s not forget about optical discs (CD-ROMs and DVDs) that offer portable data storage.


Input/Output (I/O) Devices


The input/output devices allow humans to communicate with a computer. They do so by delivering or receiving data as necessary.


You’re more than likely familiar with the most widely used input devices – the keyboard and the mouse. When it comes to output devices, it’s pretty much the same. The monitor and printer are at the forefront.


Buses


When the CPU wants to communicate with other internal components, it relies on buses.


Buses are physical signal lines that carry information between components. Most computer systems use three of these lines:


  • Data bus – Transmitting data from the CPU to memory and I/O devices and vice versa
  • Address bus – Carrying the address that points to the location the CPU wants to access
  • Control bus – Transferring control from one component to the other

Types of Computer Architecture


There’s more than one type of computer architecture. These types mostly share the same base components. However, the setup of these components is what makes them differ.


Von Neumann Architecture


The Von Neumann architecture was proposed by one of the originators of computer architecture as a concept, John Von Neumann. Most modern computers follow this computer architecture.


The Von Neumann architecture has several distinguishing characteristics:


  • All instructions are carried out sequentially.
  • It doesn’t differentiate between data and instruction. They’re stored in the same memory unit.
  • The CPU performs one operation at a time.

Since data and instructions are located in the same place, fetching them is simple and efficient. These two adjectives can describe working with the Von Neumann architecture in general, making it such a popular choice.


Still, there are some disadvantages to keep in mind. For starters, the CPU is often idle since it can only access one bus at a time. If an error causes a mix-up between data and instructions, you can lose important data. Also, defective programs sometimes fail to release memory, causing your computer to crash.


Harvard Architecture


Harvard architecture was named after the famed university. Or, to be more precise, after an IBM computer called “Harvard Mark I” located at the university.


The main difference between this computer architecture and the Von Neumann model is that the Harvard architecture separates the data from the instructions. Accordingly, it allocates separate data, addresses, and control buses for the separate memories.


The biggest advantage of this setup is that the buses can fetch data concurrently, minimizing idle time. The separate buses also reduce the chance of data corruption.


However, this setup also requires a more complex architecture that can be challenging to develop and implement.


Modified Harvard Architecture


Today, only specialty computers use the pure form of Harvard architecture. As for other machines, a modified Harvard architecture does the trick. These modifications aim to soften the rigid separation between data and instructions.


RISC and CISC Architectures


When it comes to processor architecture, there are two primary approaches.


The CISC (Complex Instruction Set Computer) processors have a single processing unit and are pretty straightforward. They tackle one task at a time. As a result, they use less memory. However, they also need more time to complete an instruction.


Over time, the speed of these processors became a problem. This led to a processor redesign, resulting in the RISC architecture.


The new and improved RISC (Reduced Instruction Set Computer) processors feature larger registers and keep frequently used variables within the processor. Thanks to these handy functionalities, they can operate much more quickly.


Instruction Set Architecture (ISA)


Instruction set architecture (ISA) defines the instructions that the processor can read and act upon. This means ISA decides which software can be installed on a particular processor and how efficiently it can perform tasks.


There are three types of instruction set architecture. These types differ based on where operands are placed, and their names are pretty self-explanatory. For stack-based ISA, operands are placed on the stack, a region of memory managed through a dedicated register. The same principle applies for accumulator-based ISA (operands held in the accumulator, a type of register in the CPU) and register-based ISA (operands held in multiple registers within the system).


The register-based ISA is most commonly used in modern machines. You’ve probably heard of some of the most popular examples. For CISC architecture, there are x86 and MC68000. As for RISC, SPARC, MIPS, and ARM stand out.


Pipelining and Parallelism in Computer Architecture


In computer architecture, pipelining and parallelism are methods used to speed up processing.


Pipelining refers to overlapping multiple instructions and processing them simultaneously, each at a different stage. This wouldn’t be possible without a pipeline-like structure. Imagine a factory assembly line, and you’ll understand how pipelining works instantly.


This method significantly increases the number of processed instructions and comes in two types:


  • Instruction pipelines – Used for reading and executing consecutive instructions from memory
  • Arithmetic pipelines – Used for fixed-point multiplication, floating-point operations, and similar calculations

Parallelism entails using multiple processors or cores to process data simultaneously. Thanks to this collaborative approach, large amounts of data can be processed quickly.


Computer architecture employs two types of parallelism:


  • Data parallelism – Executing the same task with multiple cores and different sets of data
  • Task parallelism – Performing different tasks with multiple cores and the same or different data

Multicore processors are crucial for increasing the efficiency of parallelism as a method.
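
As a rough illustration of data parallelism on a multicore processor, here is a minimal Python sketch using the standard library’s multiprocessing module (the workload is a toy function chosen purely for demonstration):

```python
from multiprocessing import Pool

def square(n):
    # Each worker process runs the same task on a different slice of the data
    return n * n

if __name__ == "__main__":
    data = range(100_000)
    with Pool(processes=4) as pool:        # four worker processes (cores)
        results = pool.map(square, data)   # the data is split across the workers
    print(results[:5])
```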


Memory Hierarchy and Cache


In computer system architecture, memory hierarchy is essential for minimizing the time it takes to access the memory units. It refers to separating memory units based on their response times.


The most common memory hierarchy goes as follows:


  • Level 1: Processor registers
  • Level 2: Cache memory
  • Level 3: Primary memory
  • Level 4: Secondary memory

The cache memory is a small and fast memory located close to a processor core. The CPU uses it to reduce the time and energy needed to access data from the primary memory.


Cache memory can be further broken into levels.


  • L1 cache (the primary cache) – The fastest cache unit in the system
  • L2 cache (the secondary cache) – The slower but more spacious option than Level 1
  • L3 cache (a specialized cache) – The largest and the slowest cache in the system used to improve the performance of the first two levels

When it comes to determining where the data will be stored in the cache memory, three mapping techniques are employed:


  • Direct mapping – Each memory block is mapped to one pre-determined cache location
  • Associative mapping – Each memory block is mapped to a single location, but it can be any location
  • Set associative mapping – Each memory block is mapped to a subset of locations

The performance of cache memory directly impacts the overall performance of a computing system. The following cache replacement policies are used to better process big data applications:


  • FIFO (first in, first out) – The block that entered the cache first is the first to be replaced
  • LRU (least recently used) – The least recently used entry is the first to be discarded (a minimal code sketch of this policy follows the list)
  • LFU (least frequently used) – The least frequently used element gets eliminated first
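
To make the LRU idea concrete, here is a minimal sketch of a least recently used cache written in Python. It is a simplified software model of the policy, not how hardware caches are implemented:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # keys kept in order of last use

    def get(self, key):
        if key not in self.store:
            return None                     # cache miss
        self.store.move_to_end(key)         # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry
```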

Input/Output (I/O) Systems


The input/output or I/O systems are designed to receive and send data to a computer. Without these processing systems, the computer wouldn’t be able to communicate with people and other systems and devices.


There are several types of I/O systems:


  • Programmed I/O – The CPU directly issues a command to the I/O module and waits for it to be executed
  • Interrupt-Driven I/O – The CPU moves on to other tasks after issuing a command to the I/O system
  • Direct Memory Access (DMA) – The data is transferred between the memory and I/O devices without passing through the CPU

There are three standard I/O interfaces used for physically connecting hardware devices to a computer:


  • Peripheral Component Interconnect (PCI)
  • Small Computer System Interface (SCSI)
  • Universal Serial Bus (USB)

Power Consumption and Performance in Computer Architecture


Power consumption has become one of the most important considerations when designing modern computer architecture. Failing to consider this aspect leads to power dissipation. This, in turn, results in higher operating costs and a shorter lifespan for the machine.


For this reason, the following techniques for reducing power consumption are of utmost importance:


  • Dynamic Voltage and Frequency Scaling (DVFS) – Scaling the voltage and clock frequency up or down based on the required performance
  • Clock gating – Shutting off the clock signal when the circuit isn’t in use
  • Power gating – Shutting off the power to circuit blocks when they’re not in use

Besides power consumption, performance is another crucial consideration in computer architecture. The performance is measured as follows:


  • Instructions per second (IPS) – Measuring efficiency at any clock frequency
  • Floating-point operations per second (FLOPS) – Measuring the numerical computing performance
  • Benchmarks – Measuring how long the computer takes to complete a series of test programs

Emerging Trends in Computer Architecture


Computer architecture is continuously evolving to meet modern computing needs. Keep an eye out for these fascinating trends:


  • Quantum computing (relying on the laws of quantum mechanics to tackle complex computing problems)
  • Neuromorphic computing (modeling the computer architecture components on the human brain)
  • Optical computing (using photons instead of electrons in digital computation for higher performance)
  • 3D chip stacking (using 3D instead of 2D chips as they’re faster, take up less space, and require less power)

A One-Way Ticket to Computing Excellence


As you can tell, computer architecture directly affects your computer’s speed and performance. This launches it to the top of priorities when building this machine.


High-performance computers might’ve been nice-to-haves at some point. But in today’s digital age, they’ve undoubtedly become a need rather than a want.


In trying to keep up with this ever-changing landscape, computer architecture is continuously evolving. The end goal is to develop an ideal system in terms of speed, memory, and interconnection of components.


And judging by the current dominant trends in this field, that ideal system is right around the corner!

A Closer Look at the Difference Between DBMS and RDBMS
John Loewen
June 27, 2023

Thanks to many technological marvels of our era, we’ve moved from writing important documents using pen and paper to storing them digitally.


Database systems emerged as the amount and complexity of information we need to keep have increased significantly in the last decades. They represent virtual warehouses for storing documents. Database management systems (DBMS) and relational database management systems (RDBMS) were born out of a burning need to easily control, organize, and edit databases.


Both DBMS and RDBMS represent programs for managing databases. But besides the one letter in the acronym, the two terms differ in several important aspects.


Here, we’ll outline the difference between DBMS and RDBMS, help you learn the ins and outs of both, and choose the most appropriate one.


Definition of DBMS (Database Management Systems)


While working for General Electric during the 1960s, Charles W. Bachman recognized the importance of proper document management and found that the solutions available at the time weren’t good enough. He did his research and came up with a database management system, a program that made storing, editing, and retrieving files a breeze. Unknowingly, Bachman revolutionized the industry and offered the world a convenient database management solution with amazing properties.


Key Features


Over the years, DBMSs have become powerful beasts that allow you to enhance performance and efficiency, save time, and handle huge amounts of data with ease.


One of the key features of DBMSs is that they store information as files in one of two forms: hierarchical or navigational. When managing data, users can use one of several manipulation functions the systems offer:


  • Inserting data
  • Deleting data
  • Updating data

DBMSs are simple structures ideal for smaller companies that don’t deal with huge amounts of data. Only a single user can handle information, which can be a deal-breaker for larger entities.


Although fairly simple, DBMSs bring a lot to the table. They allow you to access, edit, and share data in the blink of an eye. Moreover, DBMSs let you unify your team and have accurate and reliable information on the record, ensuring nobody is left out. They also help you stay compliant with different security and privacy regulations and lower the risk of violations. Finally, having an efficient database management system leads to wiser decision-making that can ultimately save you a lot of time and money.


Examples of Popular DBMS Software


When DBMSs were just becoming a thing, you had software like Clipper and FoxPro. Today, the most popular (and simplest) examples of DBMS software are XML, Windows Registry, and file systems.



Definition of RDBMS (Relational Database Management Systems)


Not long after DBMS came into being, people recognized the need to keep data in the form of tables. They figured storing info in rows (tuples) and columns (attributes) allows a clearer view and easier navigation and information retrieval. This idea led to the birth of relational database management systems (RDBMS) in the 1970s.


Key Features


As mentioned, the only way RDBMSs store information is in the form of tables. Many love this feature because it makes organizing and classifying data according to different criteria a piece of cake. Many companies that use RDBMSs utilize multiple tables to store their data, and sometimes, the information in them can overlap. Fortunately, RDBMSs allow relating data from various tables to one another (hence the name). Thanks to this, you’ll have no trouble adding the necessary info in the right tables and moving it around as necessary.


Since you can relate different pieces of information from your tables to each other, you can achieve normalization. However, normalization isn’t the process of making your table normal. It’s a way of organizing information to remove redundancy and enhance data integrity.


In this technological day and age, we see data growing exponentially. If you’re working with RDBMSs, there’s no need to be concerned. The systems can handle vast amounts of information and offer exceptional speed and total control. Best of all, multiple users can access RDBMSs at a time and enhance your team’s efficiency, productivity, and collaboration.


Simply put, an RDBMS is a more advanced, powerful, and versatile version of DBMS. It offers speed, plenty of convenient features, and ease of use.


Examples of Popular RDBMS Software


As more and more companies recognize the advantages of using RDBMS, the availability of software grows by the day. Those who have tried several options agree that Oracle and MySQL are among the best choices.


Key Differences Between DBMS and RDBMS


Now that you’ve learned more about DBMS and RDBMS, you probably have an idea of the most significant differences between them. Here, we’ll summarize the key DBMS vs. RDBMS differences.


Data Storage and Organization


The first DBMS and RDBMS difference we’ll analyze is the way in which the systems store and organize information. With DBMS, data is stored and organized as files. This system uses either a hierarchical or navigational form to arrange the information. With DBMS, you can access only one element at a time, which can lead to slower processing.


On the other hand, RDBMS uses tables to store and display information. The data featured in several tables can be related to each other for ease of use and better organization. If you want to access multiple elements at the same time, you can; there are no constraints regarding this, as opposed to DBMS.


Data Integrity and Consistency


When discussing data integrity and consistency, it’s necessary to explain the concept of constraints in DBMS and RDBMS. Constraints are sets of “criteria” applied to data and/or operations within a system. When constraints are in place, only specific types of information can be displayed, and only specific operations can be completed. Sounds restricting, doesn’t it? The entire idea behind constraints is to enhance the integrity, consistency, and correctness of data displayed within a database.


DBMS lacks constraints. Hence, there’s no guarantee the data within this system is consistent or correct. Since there are no constraints, the risk of errors is higher.


RDBMS have constraints, resulting in the reliability and integrity of the data. Plus, normalization (removing redundancies) is another option that contributes to data integrity in RDBMS. Unfortunately, normalization can’t be achieved in DBMS.


Query Language and Data Manipulation


DBMS uses multiple query languages to manipulate data. However, none of these languages offer the speed and convenience present in RDBMS.


RDBMS manipulates data with structured query language (SQL). This language lets you retrieve, create, insert, or drop data within your relational database without difficulty.
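
A minimal sketch of those SQL operations, run against SQLite through Python’s built-in sqlite3 module (the employees table and its contents are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory relational database
cur = conn.cursor()

# Create a relation (table) and insert a few rows
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL)")
cur.executemany(
    "INSERT INTO employees (name, salary) VALUES (?, ?)",
    [("Ada", 72000), ("Grace", 81000)],
)

# Retrieve data with a SELECT query
for row in cur.execute("SELECT name, salary FROM employees WHERE salary > 75000"):
    print(row)

conn.close()
```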


Scalability and Performance


If you have a small company and/or don’t need to deal with vast amounts of data, a DBMS can be the way to go. But keep in mind that a DBMS can only be accessed by one person at a time. Plus, there’s no option to access more than one element at once.


With RDBMSs, scalability and performance are moved to a new level. An RDBMS can handle large amounts of information in a jiff. It also supports multiple users and allows you to access several elements simultaneously, thus enhancing your efficiency. This makes RDBMSs excellent for larger companies that work with large quantities of data.


Security and Access Control


Last but not least, an important difference between DBMS and RDBMS lies in security and access control. DBMSs have basic security features. Therefore, there’s a higher chance of breaches and data theft.


RDBMSs have various security measures in place that keep your data safe at all times.


Choosing the Right Database Management System


The first criterion that will help you make the right call is your project’s size and complexity. Small projects with relatively simple data are ideal for DBMSs. But if you’re tackling a lot of complex data, RDBMSs are the logical option.


Next, consider your budget and resources. Since they’re simpler, DBMSs are more affordable, in both aspects. RDBMSs are more complex, so naturally, the price of software is higher.


Finally, the factor that affects what option is the best for you is the desired functionality. What do you want from the program? Is it robust features or a simple environment with a few basic options? Your answer will guide you in the right direction.


Pros and Cons of DBMS and RDBMS


DBMS


Pros:


  • Doesn’t involve complex query processing
  • Cost-effective solution
  • Ideal for processing small data
  • Easy data handling via basic queries

Cons:


  • Doesn’t allow accessing multiple elements at once
  • No way to relate data
  • Doesn’t inherently support normalization
  • Higher risk of security breaches
  • Single-user system

RDBMS


Pros:


  • Advanced, robust, and well-organized
  • Ideal for large quantities of information
  • Data from multiple tables can be related
  • Multi-user system
  • Supports normalization

Cons:


  • More expensive
  • Complex for some people

Examples of Use Cases


DBMS


DBMS is used in many sectors where more basic storing and management of data is required, be it sales and marketing, education, banking, or online shopping. For instance, universities use DBMS to store student-related data, such as registration details, fees paid, attendance, exam results, etc. Libraries use it to manage the records of thousands of books.


RDBMS


RDBMS is used in many industries today, especially those continuously requiring processing and storing large volumes of data. For instance, airline companies utilize RDBMS for passenger and flight-related information and schedules. Human resources departments use RDBMS to store and manage information related to employees and their payroll statistics. Manufacturers around the globe use RDBMS for operational data, inventory management, and supply chain information.


Choose the Best Solution


An RDBMS is a more advanced and powerful younger sibling of a DBMS. While the former offers more features, convenience, and the freedom to manipulate data as you please, it isn’t always the right solution. When deciding which road to take, prioritize your needs.

Everything You Need to Know About Keys in DBMS
John Loewen
June 26, 2023

In a database, you have entities (which have attributes), and relationships between those entities. Managing them is key to preventing chaos from engulfing your database, which is where the concept of keys comes in. These unique identifiers enable you to pick specific rows in an entity set, as well as define their relationships to rows in other entity sets, allowing your database to handle complex computations.


Let’s explore keys in DBMS (database management systems) in more detail, before digging into everything you need to know about the most important keys – primary keys.


Understanding Keys in DBMS


Keys in DBMS are attributes that you use to identify specific rows inside a table, in addition to finding the relation between two tables. For example, let’s say you have a table for students, with that table recording each student’s “ID Number,” “Name,” “Address,” and “Teacher” as attributes. If you want to identify a specific student in the table, you’ll need to use one of these attributes as a key that allows you to pull the student’s record from your database. In this case “ID Number” is likely the best choice because it’s a unique attribute that only applies to a single student.


Types of Keys in DBMS


Beyond the basics of serving as unique identifiers for rows in a database, keys in DBMS can take several forms:


  • Primary Keys – An attribute that is present in the table for all of the records it contains, with each instance of that attribute being unique to the record. The previously-mentioned “ID Number” for students is a great example, as no student can have the same number as another student.
  • Foreign Key – Foreign keys allow you to define and establish relationships between a pair of tables. If Table A needs to refer to the primary key in Table B, you’ll use a foreign key in Table A so you have values in that table to match those in Table B.
  • Unique Key – These are very similar to primary keys in that both contain unique identifiers for the records in a table. The only difference is that a unique key can contain a null value, whereas a primary key can’t.
  • Candidate Key – Though you may have picked a unique attribute to serve as your primary key, there may be other candidates within a table. Coming back to the student example, you may record the phone numbers and email addresses of your students, which can be as unique as the student ID assigned to the individual. These candidate keys are also unique identifiers, allowing them to be used in tandem with a primary key to identify a specific row in a table.
  • Composite Key – If you have attributes that wouldn’t be unique when taken alone, but can be combined to form a unique identifier for a record, you have a composite key.
  • Super Key – This term refers to any set of attributes that, taken together, uniquely identifies a record; candidate keys are the minimal super keys. Just like an employer sifting through job candidates to find the perfect person, you’ll sift through your super key set to choose the ideal primary key amongst your candidate keys.

So, why are keys in DBMS so important?


Keys ensure you maintain data integrity across all of the tables that make up your database. Without them, the relationships between each table become messy hodgepodges, creating the potential for duplicate records and errors that deliver inaccurate reports from the database. Having unique identifiers (in the form of keys) allows you to be certain that any record you pull, and the relationships that apply to that record, are accurate and unrepeated.
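
Here is a minimal sketch of some of these ideas in SQL, using Python’s built-in sqlite3 module (the teacher and student tables mirror the running example but are otherwise invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces foreign keys when asked
cur = conn.cursor()

cur.execute("""
    CREATE TABLE teachers (
        teacher_id INTEGER PRIMARY KEY,   -- primary key: unique and non-null
        name       TEXT NOT NULL
    )
""")

cur.execute("""
    CREATE TABLE students (
        student_id INTEGER PRIMARY KEY,   -- primary key for this table
        email      TEXT UNIQUE,           -- unique key: never repeated, but may be null
        teacher_id INTEGER,               -- foreign key relating each student to a teacher
        FOREIGN KEY (teacher_id) REFERENCES teachers (teacher_id)
    )
""")

conn.close()
```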



Primary Key Essentials


As mentioned, any unique attribute in a table can serve as a primary key, though this doesn’t mean that every unique attribute is a great choice. The following characteristics help you to define the perfect primary key.


Uniqueness


If your primary key is repeatable across records, it can’t serve as a unique identifier for a single record. For example, our student table may have multiple people named “John,” so you can’t use the “Name” attribute to find a specific student. You need something unique to that student, such as the previously mentioned ID number.


Non-Null Values


Primary keys must always contain a value, or else you risk losing records in a table because you have no way of calling upon them. This need for non-null values can be used to eliminate some candidates from primary key consideration. For instance, it’s feasible (though unlikely) that a student won’t have an email address, creating the potential for null values that mean the email address attribute can’t be a primary key.


Immutability


A primary key that can change over time is a key that can cause confusion. Immutability is the term used for any attribute that’s unchanging to the point where it’s an evergreen attribute that you can use to identify a specific record forever.


Minimal


Ideally, one table should have one attribute that serves as its primary key, which is where the term “minimal” comes in. It’s possible for a table to have a composite or super key set, though both create the possibility of confusion and data integrity issues.


The Importance of a Primary Key in DBMS


We can distill the reason why having a primary key in DBMS for each of your tables is important into the following reasons:


  • You can use a primary key to identify each unique record in a table, meaning no multi-result returns to your database searches.
  • Having a primary key means a record can’t be repeated in the table.
  • Primary keys make data retrieval more efficient because you can use a single attribute for searches rather than multiple.

Functions of Primary Keys


Primary keys in DBMS serve several functions, each of which is critical to your DBMS.


Data Identification


Imagine walking into a crowded room and shouting out a name. The odds are that several people (all of whom have the same name) will turn their heads to look at you. That’s basically what you’re doing if you try to pull records from a table without using a primary key.


A primary key in DBMS serves as a unique identifier that you can use to pull specific records. Coming back to the student example mentioned earlier, a “Student ID” is only applicable to a single student, making it a unique identifier you can use to find that student in your database.


Ensure Data Integrity


Primary keys protect data integrity in two ways.


First, they prevent duplicate records from building up inside a single table, ensuring you don’t get multiple instances of the same record. Second, they ensure referential integrity, which is the term used to describe what happens when one table in your database needs to refer to the records stored in another table.


For example, let’s say you have tables for “Students” and “Teachers” in your database. The primary keys assigned to your students and teachers allow you to pull individual records as needed from each table. But every “Teacher” has multiple “Students” in their class. So, the primary key from the “Teachers” table is used as a foreign key in the “Students” table, allowing you to denote the one-to-many relationship between a teacher and their class of students. That foreign key also ensures referential integrity because every value it contains must match a real teacher’s unique identifier, which you can look up in your “Teachers” table.


Data Retrieval


If you need to pull a specific record from a table, you can’t rely on attributes that can repeat across several records in that table. Again, the “Name” example highlights the problem here, as several people could have the same name. You need a unique identifier for each record so you can retrieve a single record from a huge set without having to pore through hundreds (or even thousands) of records.


Best Practices for Primary Key Selection


Now that you understand how primary keys in DBMS work, here are some best practices for selecting the right primary key for your table:


  • Choose Appropriate Attributes as Candidates – If the attribute isn’t unique to each record, or it can contain a null value (as is the case with email addresses and phone numbers), it’s not a good candidate for a primary key.
  • Avoid Using Sensitive Information – Using personal or sensitive information as a primary key creates a security risk because anybody who cracks your database could use that information for other purposes. Make your primary keys unique and only applicable to your database, which allows you to encrypt any sensitive information stored in your tables.
  • Consider Surrogate Keys – Some tables don’t have natural attributes that you can use as primary keys. In these cases, you can create a primary key out of thin air and assign it to each record. The “Student ID” referenced earlier is a great example, as students entering a school don’t come with their own ID numbers. Those numbers are given to the student (or simply used in the database that collects their data), making them surrogate keys.
  • Ensure Primary Key Stability – Any attribute that can change isn’t suitable for use as a primary key because it causes stability issues. Names, email addresses, phone numbers, and even bank account details are all things that can change, making them unsuitable. Evergreen and unchanging is the way to go with primary keys.

Choose the Right Keys for Your Database


You need to understand the importance of a primary key in DBMS (or multiple primary keys when you have several tables) so you can define the relationships between tables and identify unique records inside your tables. Without primary keys, you’ll find it much harder to run reports because you won’t feel confident in the accuracy of the data returned. Each search may pull up duplicate or incorrect records because of a lack of unique identifiers.


Thankfully, many of the tables you create will have attributes that lend themselves well to primary key status. And even when that isn’t the case, you can use surrogate keys in DBMS to assign primary keys to your tables. Experiment with your databases, testing different potential primary keys to see what works best for you.

A Comprehensive Guide to Understanding ER Diagrams in DBMS
John Loewen
June 25, 2023 · min read

An ER diagram in DBMS (database management systems) is a lot like a storyboard for an animated TV show – it’s a collection of diagrams that show how everything fits together. Where a storyboard demonstrates the flow from one scene to the next, an ER diagram highlights the components of your databases and the relationships they share.


Understanding the ER model in DBMS is the first step to getting to grips with basic database software (like Microsoft Access) and more complex database-centric programming languages, such as SQL. This article explores ER diagrams in detail.


ER Model in DBMS


An ER diagram in DBMS is a tangible representation of the tables in a database, the relationships between each of those tables, and the attributes of each table. These diagrams feature three core components:


  • Entities – Represented by rectangles in the diagram, entities are objects or concepts used throughout your database.
  • Attributes – These are the properties that each entity possesses. ER diagrams use ellipses to represent attributes, with the attributes themselves tending to be the fields in a table. For example, an entity for students in a school’s internal database may have attributes for student names, birthdays, and unique identification numbers.
  • Relationships – No entity in an ER diagram is an island, as each is linked to at least one other. These relationships can take multiple forms, with said relationships dictating the flow of information through the database.

Mapping out your proposed database using the ER model is essential because it gives you a visual representation of how the database works before you start coding or creating. Think of it like the blueprint you’d use to build a house, with that blueprint telling you where you need to lay every brick and fit every door.


Entities in DBMS


An Entity in DBMS tends to represent a real-life thing (like the students mentioned previously) that you can identify with certain types of data. Each entity is distinguishable from the others in your database, meaning you won’t have multiple entities listing student details.


Entities come in two flavors:


  • Tangible Entities – These are physical things that exist in the real world, such as a person, vehicle, or building.
  • Intangible Entities – If you can’t physically see or touch an entity, it’s intangible. Bank accounts are good examples. We know they exist (and have data attributed to them) but we can’t physically touch them.

There are also different entity strengths to consider:


  • Strong Entities – A strong entity is represented using a rectangle and will have at least one key attribute attached to it that allows you to identify it uniquely. In the student example we’ve already shared, a student’s ID number could be a unique identifier, creating a key attribute that leads to the “Student” entity being strong.
  • Weak Entities – Weak entities have no unique identifiers, meaning you can’t use them alone. Represented using double-outlined rectangles, these entities rely on the existence of strong entities to exist themselves. Think of it like the relationship between parent and child. A child can’t exist without a parent, in the same way that a weak entity can’t exist without a strong entity.

Once you’ve established what your entities are, you’ll gather each specific type of entity into an entity set. This set is like a table that contains the data for each entity in a uniform manner. Returning to the student example, any entity that has a student ID number, name, and birthdate, may be placed into an overarching “Student” entity set. They’re basically containers for specific entity types.



Attributes in DBMS


Every entity you establish has attributes attached to it, as you’ve already seen with the student example used previously. These attributes offer details about various aspects of the entity and come in four types:


  • Simple Attributes – A simple attribute is any attribute that you can’t break down into further categories. A student ID number is a good example, as this isn’t something you can expand upon.
  • Composite Attributes – Composite attributes are those you can break down into smaller sub-attributes. If “Name” is one of your attributes, its composites could be “First Name,” “Surname,” “Maiden Name,” and “Nickname.”
  • Derived Attributes – If you can derive an attribute from another attribute, it falls into this category. For instance, you can use a student’s date of birth to derive their age and grade level. These attributes have dotted ellipses surrounding them.
  • Multi-valued Attributes – Represented by double ellipses, these attributes cover anything that can have multiple values. Phone numbers are good examples, as people can have several cell phone or landline numbers.

Attributes are important when creating an ER model in DBMS because they show you what types of data you’ll use to populate your entities.


Relationships in DBMS


As your database becomes more complex, you’ll create several entities and entity sets, with each having relationships with others. You represent these relationships using lines, creating a network of entities with line-based descriptions telling you how information flows between them.


There are three types of relationships for an ER diagram in DBMS:


  • One-to-One Relationships – You’ll use this relationship when one entity can only have one of another entity. For example, if a school issues ID cards to its students, it’s likely that each student can only have one card. Thus, you have a one-to-one relationship between the student and ID card entities.
  • One-to-Many Relationships – This relationship type is for when one entity can have several of another entity, but the relationship doesn’t work in reverse. Bank accounts are a good example, as a customer can have several bank accounts, but each account is only accessible to one customer.
  • Many-to-Many Relationships – You use these relationships to denote when two entities can have several of each other. Returning to the student example, a student will have multiple classes, with each class containing several students, creating a many-to-many relationship.

These relationships are further broken down into “relationship sets,” which bring together all of the entities that participate in the same type of relationship. These sets have three varieties:


  • Unary – Only one entity participates in the relationship.
  • Binary – Two entities are in the relationship, such as the student and course example mentioned earlier.
  • n-ary – Multiple entities participate in the relationship, with “n” being the number of entities.

Your ER diagram in DBMS needs relationships to show how each entity set relates to (and interacts with) the others in your diagram.
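If it helps to see where these relationships end up in an actual schema, here’s a minimal sketch in Python’s sqlite3 module (the table names are illustrative) of the many-to-many case: a separate junction table stores one row per student and class pairing, so each student can sit in many classes and each class can hold many students:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (student_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE classes  (class_id   INTEGER PRIMARY KEY, title TEXT);

    -- Junction table that realizes the many-to-many relationship
    CREATE TABLE enrollments (
        student_id INTEGER REFERENCES students(student_id),
        class_id   INTEGER REFERENCES classes(class_id),
        PRIMARY KEY (student_id, class_id)
    );
""")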


ER Diagram Notations


You’ll use various forms of notation to denote the entities, attributes, relationships, and the cardinality of those relationships in your ER diagram.


Entity Notations


Entities are denoted using rectangles around a word or phrase, with a solid rectangle meaning a strong entity and a double-outlined rectangle denoting a weak entity.


Attribute Notations


Ellipses are the shapes of choice for attributes, with the following uses for each attribute type:


  • Simple and Composite Attribute – Solid line ellipses
  • Derived Attribute – Dotted line ellipses
  • Multi-Valued Attribute – Double-lined ellipses

Relationship Notations


Relationship notation uses diamonds, with a solid line diamond depicting a relationship between two entities. You may also find double-lined diamonds, which signify the relationship between a weak entity and the strong entity that owns it.


Cardinality and Modality Notations


These lines show you the maximum times an instance in one entity set can relate to the instances of another set, making them crucial for denoting the relationships inside your database.


The endpoint of the line tells you everything you need to know about cardinality and modality. For example, a line that ends with three lines (two going diagonally) signifies a “many” cardinality, while a line that concludes with a small vertical line signifies a “one” cardinality. Modality comes into play if there’s a minimum number of instances for an entity type. For example, a person can have many phone numbers but must have at least one.


Steps to Create an ER Diagram in DBMS


With the various notations for an ER diagram in DBMS explained, you can follow these steps to draw your own diagram:


  • Identify Entities – Every tangible and intangible object that relates to your database is an entity that you need to identify and define.
  • Identify Attributes – Each entity has a set of attributes (students have names, ID numbers, birthdates, etc.) that you must define.
  • Identify Relationships – Ask yourself how each entity set fits together to identify the relationships that exist between them.
  • Assign Cardinality and Modality – If you have an instance from Entity A, how many instances does it relate to in Entity B? Is there a minimum to consider? Assign cardinalities and modalities to offer the answers.
  • Finalize Your Diagram – Take a final pass over the diagram to ensure all required entities are present, they have the appropriate attributes, and that all relationships are defined.

Examples of ER Diagrams in DBMS


Once you understand the basics of the ER model in DBMS, you’ll see how it applies to multiple scenarios:


  • University Databases – A university database will have entities such as “Student,” “Teacher,” “Course,” and “Class.” Attributes depend on the entity, with the people-based entities having attributes including names, dates of birth, and ID numbers. Relationships vary (i.e., a student may only have one teacher but a single teacher may have several students).
  • Hospital Management Databases – Entities for this type of database include people (“Patients,” “Doctors,” and “Nurses”), as well as other tangibles, such as different hospital buildings and inventory. These databases can get very complex, with multiple relationships linking the various people involved to different buildings, treatment areas, and inventory.
  • E-Commerce Databases – People play an important role in the entities for e-commerce sites, too, because every site needs a list of customers. Those customers have payment details and order histories, which are potential entities or attributes. Product lists and available inventory are also factors.

Master the ER Model in DBMS


An ER diagram in DBMS can look like a complicated mass of shapes and lines at first, making them feel impenetrable to those new to databases. But once you get to grips with what each type of shape and line represents, they become crucial tools to help you outline your databases before you start developing them.


Application of what you’ve learned is the key to success with ER diagrams (and any other topic), so take what you’ve learned here and start experimenting. Consider real-world scenarios (such as those introduced above) and draw diagrams based on the entities you believe apply to those scenarios. Build up from there to figure out the attributes and relationships between entity sets and you’re well on your way to a good ER diagram.

What Is Normalization in DBMS? A Comprehensive Overview
John Loewen
June 24, 2023 · min read

The larger your database, the higher the possibility of data repetition and inaccuracies that compromise the results you pull from the database. Normalization in DBMS exists to counteract those problems by helping you to create more uniform databases in which redundancies are less likely to occur.


Mastering normalization is a key skill in DBMS for the simple fact that an error-strewn database is of no use to an organization. For example, a retailer whose database holds multiple, conflicting entries for phone numbers and email addresses can’t reach its customers as effectively as one with a single, accurate route to each customer. Let’s look at normalization in DBMS and how it helps you to create a more organized database.


The Concept of Normalization


Grab a pack of playing cards and throw them onto the floor. Now, pick up the “Jack of Hearts.” It’s a tough task because the cards are strewn all over the place. Some are facing down and there’s no rhyme, reason, or pattern to how the cards lie, meaning you’re going to have to check every card individually to find the one you want.


That little experiment shows you how critical organization is, even with a small set of “data.” It also highlights the importance of normalization in DBMS. Through normalization, you implement organizational controls using a set of principles designed to achieve the following:


  • Eliminate redundancy – Lower (or eliminate) occurrences of data repeating across different tables, or inside individual tables, in your DBMS.
  • Minimize data anomalies – Better organization makes it easier to spot datasets that don’t fit the “norm,” meaning fewer anomalies.
  • Improve data integrity – More accurate data comes from normalization controls. Database users can feel more confident in their results because they know that the controls ensure integrity.

The Process of Normalization


If normalization in DBMS is all about organization, it stands to reason that there would be a set process to follow when normalizing your tables and database:


  1. Decompose your tables – Break every table down into its various parts, which may lead to you creating several tables out of one. Through decomposition, you separate different datasets, eliminate inconsistencies, and set the stage for creating relationships and dependencies between tables.
  2. Identify functional dependencies – An attribute in one table may determine, or be determined by, another. For example, the “Customer Name” field in a retailer’s “Customer” table is functionally dependent on the “Customer ID” field: once you know the ID, the name it belongs to is fixed. Identifying these types of dependencies ensures you don’t end up with orphaned records (such as a record with a “Customer ID” and no customer details attached to it).
  3. Apply normalization rules – Once you’ve broken down your table and identified the functional dependencies, you apply relevant normalization rules. You’ll use Normal Forms to do this, with the six highlighted below each having its own rules, structures, and use cases.

Normal Forms in DBMS


There isn’t a “single” way to achieve normalization in DBMS because every database (and the tables it contains) is different. Instead, there are six normal forms you may use, with each having its own rules that you need to understand to figure out which to apply.


First Normal Form (1NF)


If a relation can’t contain multiple values, it’s in 1NF. In other words, each attribute in the table can only contain a single (called “atomic”) value.


Example


If a retailer wants to store the details of its customers, it may have attributes in its table like “Customer Name,” “Phone Number,” and “Email Address.” By applying 1NF to this table, you ensure that the attributes that could contain multiple entries (“Phone Number” and “Email Address”) only contain one, making contacting that customer much simpler.


Second Normal Form (2NF)


A table that’s in 2NF is in 1NF, with the additional condition that none of its non-prime attributes depend on only part of a candidate key (in other words, there are no partial dependencies).


Example


Let’s say an employer wants to create a table that contains information about an employee, the skills they have, and their age. An employee may have multiple skills, leading to multiple records for the same employee in the table, with each denoting a skill while the ID number and age of the employee repeat for each record.


In this table, you’ve achieved 1NF because each attribute has an atomic value. However, the employee’s age depends only on the employee ID, which is just one part of the table’s key (the ID plus the skill), creating a partial dependency. To achieve 2NF, you’d break this table down into two tables. The first will contain the employee’s ID number and age, with that ID number linking to a second table that lists each of the skills associated with the employee.
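Sketched with Python’s sqlite3 module (the names are hypothetical), the decomposed schema looks like this, with the age stored once per employee and the skills kept in their own table keyed by the employee ID:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (
        employee_id INTEGER PRIMARY KEY,
        age         INTEGER                 -- stored once, no matter how many skills
    );
    CREATE TABLE employee_skills (
        employee_id INTEGER REFERENCES employees(employee_id),
        skill       TEXT,
        PRIMARY KEY (employee_id, skill)    -- one row per employee and skill pair
    );
""")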


Third Normal Form (3NF)


In 3NF, the table must already be in 2NF, with the added rule that no non-prime attribute may depend transitively on a candidate key. A transitive functional dependency arises from a chain of dependencies: if B depends on A, and C depends on B (while A doesn’t depend on B), then C depends transitively on A.


Example


Let’s say a school creates a “Students” table with the following attributes:


  • Student ID
  • Name
  • Zip Code
  • State
  • City
  • District

In this case, the “State,” “District,” and “City” attributes all depend on the “Zip Code” attribute. That “Zip Code” attribute depends on the “Student ID” attribute, making “State,” “District,” and “City” all transitively dependent on “Student ID.”


To resolve this problem, you’d create a pair of tables – “Student” and “Student Zip.” The “Student” table contains the “Student ID,” “Name,” and “Zip Code” attributes, with that “Zip Code” attribute being the primary key of a “Student Zip” table that contains the rest of the attributes and links to the “Student” table.
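Here’s a rough sqlite3 sketch of that split (the names are illustrative); the location details live once per ZIP code, and a join pulls them back whenever they’re needed:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student_zip (
        zip_code TEXT PRIMARY KEY,
        state    TEXT,
        city     TEXT,
        district TEXT
    );
    CREATE TABLE student (
        student_id INTEGER PRIMARY KEY,
        name       TEXT,
        zip_code   TEXT REFERENCES student_zip(zip_code)
    );
""")

# Join the two tables whenever the full address details are needed.
rows = conn.execute("""
    SELECT s.name, z.city, z.state
    FROM student AS s JOIN student_zip AS z ON s.zip_code = z.zip_code
""").fetchall()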



Boyce-Codd Normal Form (BCNF)


Often referred to as 3.5NF, BCNF is a stricter version of 3NF. A table satisfies this normalization in DBMS rule if it is in 3NF and, for every functional dependency between two fields (i.e., A -> B), A is a super key of the table.


Example


Sticking with the school example, every student in a school has multiple classes. The school has a table with the following fields:


  • Student ID
  • Nationality
  • Class
  • Class Type
  • Number of Students in Class

You have several functional dependencies here:


  • Student ID -> Nationality
  • Class -> Number of Students in Class, Class Type

As a result, both the “Student ID” and “Class” attributes determine other fields, yet neither is a super key of the whole table, which violates BCNF. To achieve BCNF normalization, you’d break the above table into three – “Student Nationality,” “Student Class,” and “Class Mapping” – allowing “Student ID” and “Class” to serve as keys in their own tables.


Fourth Normal Form (4NF)


In 4NF, the database must meet the requirements of BCNF, in addition to containing no more than a single multivalued dependency. It’s often used in academic circles, as there’s little use for 4NF elsewhere.


Example


Let’s say a college has a table containing the following fields:


  • College Course
  • Lecturer
  • Recommended Book

Each of these attributes is independent of the others, meaning each can change without affecting the others. For example, the college could change the lecturer of a course without altering the recommended reading or the course’s name. As such, a single course can be associated with several lecturers and several recommended books independently of one another, creating multivalued dependencies. If a table has more than one of these types of dependencies, it’s a candidate for 4NF normalization.


Fifth Normal Form (5NF)


If your table is in 4NF, has no join dependencies, and all joining is lossless, it’s in 5NF. Think of this as the final form when it comes to normalization in DBMS, as you’ve broken your table down so much that you’ve made redundancy impossible.


Example


A college may have a table that tells them which lecturers teach certain subjects during which semesters, creating the following attributes:


  • Subject
  • Lecturer Name
  • Semester

Let’s say one of the lecturers teaches both “Physics” and “Math” for “Semester 1,” but doesn’t teach “Math” for Semester 2. That means you need to combine all of the fields in this table to get an accurate dataset, leading to redundancy. Add a third semester to the mix, especially if that semester has no defined courses or lecturers, and you end up with join dependencies.


The 5NF solution is to break this table down into three tables:


  • Table 1 – Contains the “Semester” and “Subject” attributes to show which subjects are taught in each semester.
  • Table 2 – Contains the “Subject” and “Lecturer Name” attributes to show which lecturers teach a subject.
  • Table 3 – Contains the “Semester” and “Lecturer Name” attributes so you can see which lecturers teach during which semesters.

Benefits of Normalization in DBMS


With normalization in DBMS being so much work, you need to know the following benefits to show that it’s worth your effort:


  • Improved database efficiency
  • Better data consistency
  • Easier database maintenance
  • Simpler query processing
  • Better access controls, resulting in superior security

Limitations and Trade-Offs of Normalization


Normalization in DBMS does have some drawbacks, though these are trade-offs that you accept for the above benefits:


  • The more you decompose tables, the more joins your queries require, which places extra demands on system performance as your database grows.
  • Breaking tables down leads to complexity.
  • You have to find a balance between normalization and denormalization to ensure your tables make sense.

Practical Tips for Mastering Normalization Techniques


Getting to grips with normalization in DBMS is hard, especially when you start feeling like you’re dividing tables into so many small tables that you’re losing track of the database. These tips help you apply normalization correctly:


  • Understand the database requirements – Your database exists for you to extract data from it, so knowing what you’ll need to extract indicates whether you need to normalize tables or not.
  • Document all functional dependencies – Every functional dependence that exists in your database makes the table in which it exists a candidate for normalization. Identify each dependency and document it so you know whether you need to break the table down.
  • Use software and tools – You’re not alone when poring through your database. There are plenty of tools available that help you to identify functional dependencies. Many make normalization suggestions, with some even being able to carry out those suggestions for you.
  • Review and refine – Every database evolves alongside its users, so continued refining is needed to identify new functional dependencies (and opportunities for normalization).
  • Collaborate with other professionals – A different set of eyes on a database may reveal dependencies and normalization opportunities that you don’t see.

Make Normalization Your New Norm


Normalization may seem needlessly complex, but it serves the crucial role of making the data you extract from your database more refined, accurate, and free of repetition. Mastering normalization in DBMS puts you in the perfect position to create the complex databases many organizations need in a Big Data world. Experiment with the different “normal forms” described in this article as each application of the techniques (even for simple tables) helps you get to grips with normalization.

An Introduction to Python Basics
John Loewen
June 24, 2023 · min read

Just like the snake it’s named after, Python has wrapped itself around the programming world, becoming a deeply entrenched teaching and practical tool since its 1991 introduction. It’s one of the world’s most used programming languages, with Statista claiming that 48.07% of programmers use it, making it as essential as SQL, C, and even HTML to computer scientists.


This article serves as an introduction to Python programming for beginners. You’ll learn Python basics, such as how to install it and the concepts that underpin the language. Plus, we’ll show you some basic Python code you can use to have a little play around with the language.


Python Basics


It stands to reason that you need to download and install Python onto your system before you can start using it. The latest version of Python is always available at Python.org, with installers for Windows, Linux, macOS, and a range of other platforms.


Installing Python is a universal process across operating systems. Download the installer for your OS from Python.org and open its executable. Follow the instructions and you should have Python up and running, and ready for you to play around with some Python language basics, in no time.


Python IDEs and Text Editors


Before you can start coding in your newly installed version of Python, you need an integrated development environment (IDE) on your system. These applications act as a bridge between the language you write in and the visual representation of that language on your screen. And beyond being source code editors, many IDEs also serve as debuggers and compilers, and even feature automation that can complete code (or at least offer suggestions) on your behalf.


Some of the best Python IDEs include:


  • Atom
  • Visual Studio
  • Eclipse
  • PyCharm
  • Komodo IDE

But there are plenty more besides. Before choosing an IDE, ask yourself the following questions to determine if the IDE you’re considering is right for your Python project:


  • How much does it cost?
  • Is it easy to use?
  • What are its debugging and compiling features?
  • How fast is the IDE?
  • Does this IDE give me access to the libraries I’ll need for my programs?

Basic Python Concepts


Getting to grips with the Python basics for beginners starts with learning the concepts that underpin the language. Each of these concepts defines actions you can take in the language, meaning they’re essential for writing even the simplest of programs.


Variables and Data Types


Variables in Python work much like they do in other programming languages – they’re containers in which you store a data value. The difference between Python and many other languages is that Python doesn’t have a specific command used to declare a variable. Instead, you create a variable the moment you assign a value to a name, and Python infers the data type from that value.


As for data types, they’re split into several categories, with most having multiple sub-types you can use to define different variables:


  • String – “str”
  • Numeric – “int,” “complex,” “float”
  • Sequence – “list,” “range,” “tuple”
  • Boolean – “bool”
  • Binary – “memoryview,” “bytes,” “bytearray”

There are more, though the above should be enough for your Python basics notes. Each of these data types serves a different function. For example, on the numerical side, “int” stores signed integers of unlimited length, while “float” stores decimal numbers with roughly 15 digits of precision.
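The following few lines (with arbitrary variable names and values) show both ideas at once: variables spring into existence on assignment, and the data type comes from the value you assign:

student_name = "Ada"        # str is inferred from the value; no declaration needed
age = 19                    # int
gpa = 3.87                  # float
grades = [88, 92, 75]       # list, one of the sequence types
enrolled = True             # bool

print(type(student_name), type(gpa), type(enrolled))
# <class 'str'> <class 'float'> <class 'bool'>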


Operators


When you have your variables and values, you’ll use operators to perform actions using them. These actions range from the simple (adding and subtracting numbers) to the complex (comparing values to each other). Though there are many types of operators you’ll learn as you venture beyond the Python language basics, the following three are some of the most important for basic programs:


  • Arithmetic operators – These operators allow you to handle most aspects of basic math, including addition, subtraction, division, and multiplication. There are also arithmetic operators for more complex operations, including floor division and exponentiation.
  • Comparison operators – If you want to know which value is bigger, comparison operators are what you use. They take two values, compare them, and give you a result based on the operator’s function.
  • Logical operators – “and,” “or,” and “not” are your logical operators, and they combine conditions into expressions that evaluate to “True” or “False,” as the sketch below shows.
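Here’s a quick, self-contained sketch of all three operator families (the values are arbitrary):

a, b = 10, 3

# Arithmetic operators
print(a + b, a - b, a * b, a / b)    # 13 7 30 3.3333333333333335
print(a // b, a ** b)                # floor division and exponentiation: 3 1000

# Comparison operators
print(a > b, a == b, a != b)         # True False True

# Logical operators
print(a > 5 and b < 5)               # True
print(not (a > 5 or b > 5))          # False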

Control Structures


As soon as you start introducing different types of inputs into your code, you need control structures to keep everything organized. Think of them as the foundations of your code, directing variables to where they need to go while keeping everything, as the name implies, under control. Two of the most important control structures are:


  • Conditional Statements – “if,” “elif,” and “else” fall into this category. These statements allow you to determine what the code does “if” something is the case (such as a variable equaling a certain number) and what “else” to do if the condition isn’t met.
  • Loops – “for” and “while” are your loop keywords: a “for” loop steps through a sequence of items, while a “while” loop repeats for as long as a condition holds. Both appear in the short sketch below.
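Here’s a brief sketch that combines both control structures (the numbers are arbitrary):

scores = [55, 72, 90]

# Conditional statements inside a "for" loop
for score in scores:
    if score >= 80:
        print(score, "is a high score")
    elif score >= 60:
        print(score, "is a passing score")
    else:
        print(score, "is a failing score")

# A "while" loop repeats until its condition becomes False
countdown = 3
while countdown > 0:
    print("countdown:", countdown)
    countdown -= 1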

Functions


You likely don’t want every scrap of code you write to run as soon as you start your program. Some chunks (called functions) should only run when they’re called by other parts of the code. Think of it like giving commands to a dog. A function will only sit, stay, or roll over when another part of the code tells it to do what it does.


You need to define and call functions.


Use the “def” keyword to define a function, as you see in the following example:


def first_function():
    print("This is my first function")


When you need to call that function, you simply type the function’s name followed by parentheses:


first_function()


That “call” tells your program to print out the words “This is my first function” on the screen whenever you use it.


Interestingly, Python has a collection of built-in functions, which are functions included in the language that anybody can call without having to define them first. Many relate to the data types discussed earlier, with functions like “str()” and “int()” letting you convert values into strings and integers respectively.
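For instance, the following lines rely only on built-in functions, so nothing needs defining before you call them:

print(int("42") + 1)          # int() converts the string "42" to a number: 43
print(str(3.5) + " meters")   # str() turns a number into text so it can be joined
print(len("Python"))          # len() counts the characters: 6
print(type(42))               # type() reports the data type: <class 'int'>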



Python – Basic Programs


Now that you’ve gotten to grips with some of the Python basics for beginners, let’s look at a few simple programs that almost anybody can run.


Hello, World! Program


The starting point for any new coder in almost any new language is to get the screen to print out the words “Hello, World!”. This one is as simple as you can get, as you’ll use the print command to get a piece of text to appear on screen:


print('Hello, World!')


Click the “Run” button in your IDE of choice and you’ll see the words from your print command pop up on your monitor. Though this is as simple as it gets, make sure you note the quotation marks around the text. If you leave them out, your message won’t print.


Basic Calculator Program


Let’s step things up with one of the Python basic programs for beginners that helps you to get to grips with functions. You can create a basic calculator using the language by defining functions for each of your arithmetic operators and using conditional statements to tell the calculator what to do when presented with different options.


The following example comes from Programiz.com:


# This function adds two numbers
def add(x, y):
    return x + y

# This function subtracts two numbers
def subtract(x, y):
    return x - y

# This function multiplies two numbers
def multiply(x, y):
    return x * y

# This function divides two numbers
def divide(x, y):
    return x / y

print("Select operation.")
print("1.Add")
print("2.Subtract")
print("3.Multiply")
print("4.Divide")

while True:
    # Take input from the user
    choice = input("Enter choice(1/2/3/4): ")

    # Check if choice is one of the four options
    if choice in ('1', '2', '3', '4'):
        try:
            num1 = float(input("Enter first number: "))
            num2 = float(input("Enter second number: "))
        except ValueError:
            print("Invalid input. Please enter a number.")
            continue

        if choice == '1':
            print(num1, "+", num2, "=", add(num1, num2))
        elif choice == '2':
            print(num1, "-", num2, "=", subtract(num1, num2))
        elif choice == '3':
            print(num1, "*", num2, "=", multiply(num1, num2))
        elif choice == '4':
            print(num1, "/", num2, "=", divide(num1, num2))

        # Check if user wants another calculation
        # Break the while loop if answer is no
        next_calculation = input("Let's do next calculation? (yes/no): ")
        if next_calculation == "no":
            break
    else:
        print("Invalid Input")


When you run this code, your executable asks you to choose a number between 1 and 4, with your choice denoting which mathematical operator you wish to use. Then, you enter your values for “x” and “y”, with the program running a calculation between those two values based on the operation choice. There’s even a clever piece at the end that asks you if you want to run another calculation or cancel out of the program.


Simple Number Guessing Game


Next up is a simple guessing game that takes advantage of the “random” module built into Python. You use this module to generate a number between 1 and 99, with the program asking you to guess which number it’s chosen. But unlike when you play this game with your sibling, the number doesn’t keep changing whenever you guess the right answer.


This code comes from Python for Beginners:


import random

# Pick a secret number between 1 and 99
n = random.randint(1, 99)
guess = int(input("Enter an integer from 1 to 99: "))

while True:
    if guess < n:
        print("guess is low")
        guess = int(input("Enter an integer from 1 to 99: "))
    elif guess > n:
        print("guess is high")
        guess = int(input("Enter an integer from 1 to 99: "))
    else:
        print("you guessed it right! Bye!")
        break


Upon running the code, your program uses the imported “random” module to pick its number and then asks you to enter an integer (i.e., a whole number) between 1 and 99. You keep guessing until you get it right and the program delivers a “Bye” message.


Python Libraries and Modules


As you move beyond the basic Python language introduction and start to develop more complex code, you’ll find your program getting a bit on the heavy side. That’s where modules come in. You can save chunks of your code into a module, which is a file with the “.py” extension, allowing you to call that module into another piece of code.


Typically, these modules contain functions, variables, and classes that you want to use at multiple points in your main program. Retyping those things at every instance where they’re called takes too much time and leaves you with code that’s bogged down in repeated processes.
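For example (the file name is hypothetical), you might save a small module as greetings.py next to your main script and import it wherever it’s needed:

# greetings.py  (the module file)
def say_hello(name):
    return "Hello, " + name + "!"

# main.py  (your main program)
import greetings

print(greetings.say_hello("Ada"))   # Hello, Ada!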


Libraries take things a step further by offering you a collection of modules that you can call on as needed, similar to how you can borrow any book from a physical library. Examples include the “Matplotlib” library, which features a bunch of modules for data visualization, and “Beautiful Soup,” which allows you to extract data from XML and HTML files.



Best Practices and Tips for Basic Python Programs for Beginners


Though we’ve focused primarily on the code aspect of the language in these Python basic notes so far, there are a few tips that will help you create better programs that aren’t directly related to learning the language:


  • Write clean code – Imagine that you’re trying to find something you need in a messy and cluttered room. It’s a nightmare to find what you’re looking for because you’re constantly tripping over stuff you don’t need. That’s what happens in a Python program if you create bloated code or repeat functions constantly. Keep it clean and your code is easier to use.
  • Debugging and error handling – Buggy code is frustrating to users, especially if that code just dumps them out of a program when it hits an error. Beyond debugging (which everybody should do as standard) you must build error responses into your Python code to let users know what’s happening when something goes wrong.
  • Use online communities and resources – Python is one of the most established programming languages in the world, and there’s a massive community built up around it. Take advantage of those resources. Try your hand at a program first, then take it to the community to see if they can point you in the right direction.

Get to Grips With the Basic Concepts of Python


With these Python introduction notes, you have everything you need to understand some of the more basic aspects of the language, as well as run a few programs. Experimentation is your friend, so try taking what you’ve learned here and writing a few other simple programs for yourself. Remember – the Python community (along with stacks of online resources) is available to help you when you’re struggling.
