
IT Simplified: Software Defined Data Center (SDDC)

What is an SDDC?

A traditional data center is a facility where organizational data, applications, networks, and infrastructure are centrally housed and accessed. It is the hub for IT operations and physical infrastructure equipment, including servers, storage devices, network equipment, and security devices.

In contrast, a software-defined data center is an IT-as-a-Service (ITaaS) platform that services an organization’s software, infrastructure, or platform needs. An SDDC can be housed on-premises, at a managed service provider (MSP), or in private, public, or hosted clouds. (For our purposes, we will discuss the benefits of hosting an SDDC in the cloud.) Like traditional data centers, SDDCs also host servers, storage devices, network equipment, and security devices.

Here’s where the differences come in.

Unlike traditional data centers, an SDDC uses a virtualized environment to deliver a programmatic approach to the functions of a traditional data center. SDDCs rely heavily on virtualization technologies to abstract, pool, manage, and deploy data center functions. Like server virtualization concepts used for years, SDDCs abstract, pool, and virtualize all data center services and resources in order to:

  1. Reduce IT resource usage
  2. Provide automated deployment and management
  3. Increase flexibility
  4. Improve business agility

Key SDDC Architectural Components include:

  • Compute virtualization, where virtual machines (VMs)—including their operating systems, CPUs, memory, and software—reside on cloud servers. Compute virtualization allows users to create software implementations of computers that can be spun up or spun down as needed, decreasing provisioning time.
  • Network virtualization, where the network infrastructure servicing your VMs can be provisioned without worrying about the underlying hardware. Network infrastructure needs—telecommunications, firewalls, subnets, routing, administration, DNS, etc.—are configured inside your cloud SDDC on the vendor’s abstracted hardware. No network hardware assembly is required.
  • Storage virtualization, where disk storage is provisioned from the SDDC vendor’s storage pool. You get to choose your storage types, based on your needs and costs. You can quickly add storage to a VM when needed.
  • Management and automation software. SDDCs use management and automation software to keep business critical functions working around the clock, reducing the need for IT manpower. Remote management and automation are delivered via a software platform accessible from any suitable location, via APIs or Web browser access (see the sketch after this list).
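
Provisioning in an SDDC is programmatic: resources are requested through the management platform’s API rather than by racking hardware. Below is a minimal, hypothetical sketch in Python; the endpoint, token, and payload fields are illustrative placeholders, not any specific vendor’s API (real platforms such as VMware vSphere or OpenStack each define their own).

```python
# Hypothetical sketch: provisioning a VM through an SDDC management API.
# The endpoint, token, and field names are illustrative placeholders.
import requests

API = "https://sddc.example.com/api/v1"        # hypothetical management endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

vm_spec = {
    "name": "web-01",
    "cpus": 4,
    "memory_gb": 16,
    "disk_gb": 100,          # drawn from the vendor's virtualized storage pool
    "network": "frontend",   # a virtual network; no hardware assembly required
}

resp = requests.post(f"{API}/vms", json=vm_spec, headers=HEADERS, timeout=30)
resp.raise_for_status()
print("Provisioned VM:", resp.json().get("id"))
```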

What is the difference between SDDC and cloud?

A software-defined data center differs from a private cloud, since a private cloud only has to offer virtual-machine self-service, beneath which it could use traditional provisioning and management. Instead, SDDC concepts imagine a data center that can encompass private, public, and hybrid clouds.



IT Simplified: Intel® Virtual RAID on CPU

Intel® Virtual RAID on CPU (Intel® VROC) is an enterprise RAID solution that unleashes the performance of NVMe SSDs. Intel® VROC is enabled by a feature in Intel® Xeon® Scalable processors called Intel® Volume Management Device (Intel® VMD), an integrated controller inside the CPU PCIe root complex. Intel® VMD isolates SSD error and event handling from the operating system to help reduce system crashes and reboots. NVMe SSDs are directly connected to the CPU, allowing the full performance potential of fast storage devices, such as Intel® Optane™ SSDs, to be realized. Intel® VROC enables these benefits without the complexity, cost, and power consumption of traditional hardware RAID host bus adapter (HBA) cards placed between the drives and the CPU.

Features of VROC include:

  1. Enterprise reliability: Increased protection from data loss and corruption in various failure scenarios such as unexpected power loss, even when a volume is degraded.
  2. Extended Management Tools: Pre-OS and OS management includes HII, CLI, email alerts, and a Windows GUI, all supporting NVMe and SATA controls.
  3. Integrated caching: Intel® VROC Integrated Caching allows easy addition of an Intel® Optane™ SSD caching layer to accelerate RAID storage arrays.
  4. Boot RAID: Redundancy for OS images directly off the CPU with pre-OS configuration options for platform set-up.
  5. High performance storage: Connect NVMe SSDs directly to the CPU for full bandwidth storage connections.

Intel VROC supports the three feature sets below, which require a licensing mechanism to activate:

  1. Intel VROC Standard
  2. Intel VROC Premium
  3. Intel VROC Intel SSD Only

The VROC feature debuted in 2017 to simplify and reduce the cost of high-performance storage arrays, and it has enjoyed broad uptake in enterprise applications. The feature brings NVMe RAID functionality on-die to the CPU for SSD storage devices, thus providing many of the performance, redundancy, bootability, manageability, and serviceability benefits that were previously only accessible with an additional device, like a RAID card or HBA. Thus, VROC gives users a host of high-performance storage features without the added cost, power consumption, heat, and complexity of another component, like a RAID card or HBA, in the chassis — not to mention extra cabling.

For VROC to work on a system, the requirements are:

  1. Intel Xeon Scalable (Skylake or newer) system with BIOS support for Intel VMD
  2. The motherboard needs a header for the VROC hardware key
  3. A VROC hardware key with the level of functionality you want needs to be installed
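
Once those requirements are met, VROC arrays on Linux are typically managed with mdadm using Intel’s IMSM container metadata. The sketch below shows the general two-step pattern (container, then volume); the device names are illustrative, and the commands require root on a VMD-enabled platform with the appropriate VROC key.

```python
# Sketch: creating a RAID 1 volume under Intel VROC on Linux via mdadm.
# Assumes root privileges, a VMD-enabled platform, and illustrative device names.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Create an IMSM container over the NVMe drives in the VMD domain.
run(["mdadm", "--create", "/dev/md/imsm0", "--metadata=imsm",
     "--raid-devices=2", "/dev/nvme0n1", "/dev/nvme1n1"])

# 2. Create a RAID 1 volume inside the container.
run(["mdadm", "--create", "/dev/md/vroc_vol", "--level=1",
     "--raid-devices=2", "/dev/md/imsm0"])

# 3. Inspect the assembled volume.
run(["mdadm", "--detail", "/dev/md/vroc_vol"])
```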

IT Simplified: Quantum Computing

The term ‘quantum chip,’ or ‘quantum processing unit,’ or ‘QPU,’ refers to a physical (fabricated) chip that has several interconnected qubits, or quantum bits. A QPU serves as the basic building block of a complete quantum computer, which also consists of the control electronics, the QPU’s housing environment, and several other parts. A QPU enables a form of computation that’s built on quantum physics — nature’s operating system.

How a QPU differs from a CPU 

A quantum computer (with a QPU) operates in a fundamentally different way from a classical computer. A ‘bit’ in traditional computing (using a CPU) is a very small unit of binary data, which can be either a one or a zero. A processed bit is implemented by one of two levels of low DC voltage in traditional computer systems. The most recent variation of this concept is the ‘qubit,’ or quantum bit. Qubits use the phenomenon of superposition, which enables them to be in numerous states simultaneously, in contrast to binary bits, which can only be in either/or situations.
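
To make superposition concrete, here is a tiny self-contained Python sketch that simulates one qubit as a two-amplitude state vector, applies a Hadamard gate to put it into an equal superposition, and samples measurements. It is a toy illustration of the underlying math, not how a physical QPU is programmed.

```python
import math
import random

# One qubit as a state vector: amplitudes for |0> and |1>.
state = [1.0, 0.0]  # start in the definite state |0>

def hadamard(s):
    """Apply the Hadamard gate, which puts a basis state into superposition."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[1]), h * (s[0] - s[1])]

def measure(s):
    """Collapse the state: each outcome occurs with probability |amplitude|^2."""
    return 0 if random.random() < s[0] ** 2 else 1

state = hadamard(state)  # amplitudes become (~0.707, ~0.707)
samples = [measure(state) for _ in range(1000)]
print(samples.count(0), samples.count(1))  # roughly 500 / 500
```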

The capabilities of quantum computers surpass those of traditional computers, especially in terms of speed. A quantum computer can complete a task in two or three stages, whereas a classical computer needs thousands of steps to handle the same problem. Therefore, most quantum chips or QPUs are used as accelerators in heterogeneous multi-core computing devices. These can be connected with a classical processor (CPU) to provide a performance boost that cannot be achieved classically.

The classical control circuitry and the quantum processing unit (QPU) are two of the most important hardware components in quantum computing. Subsystems of the QPU include registers and gates (sometimes called QRAM), a quantum control unit for driving the system states, and circuitry to interface between the classical host CPU and the QPU. However, there are various types of QPUs, with variations in the subsystems as well as the underlying principles.

Quantum computers have the potential to revolutionize the field of computing, but they also come with a number of disadvantages. The main challenges and limitations include noise and decoherence, scalability, error correction, the lack of robust quantum algorithms, high cost, and power consumption. A quantum computer isn’t a supercomputer that can do everything faster. In fact, one of the goals of quantum computing research is to study which problems can be solved by a quantum computer faster than a classical computer and how large the speedup can be. Quantum computers do exceptionally well with problems that require calculating a large number of possible combinations, which arise in many areas, such as quantum simulation, cryptography, quantum machine learning, and search problems. As the global community of quantum researchers, scientists, engineers, and business leaders collaborates to advance the quantum ecosystem, we expect to see quantum impact accelerate across every industry.


IT Simplified: Software As A Service

SaaS, or software-as-a-service, is application software hosted on the cloud and used over an internet connection via a web browser, mobile app or thin client. The SaaS provider is responsible for operating, managing and maintaining the software and the infrastructure on which it runs. The customer simply creates an account, pays a fee, and gets to work.

SaaS applications are sometimes called on-demand software, or hosted software. Whatever the name, SaaS applications run on a SaaS provider’s infrastructure. The provider manages access to the application, including security, availability, and performance.

SaaS Characteristics

A good way to understand the SaaS model is by thinking of a bank, which protects the privacy of each customer while providing service that is reliable and secure—on a massive scale. A bank’s customers all use the same financial systems and technology without worrying about anyone accessing their personal information without authorisation.

Multitenant Architecture

In a multitenant architecture, all users and applications share a single, common infrastructure and code base that is centrally maintained. Because SaaS vendor clients are all on the same infrastructure and code base, vendors can innovate more quickly and save the valuable development time previously spent on maintaining numerous versions of outdated code.
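
A common way to realize this shared-infrastructure isolation is to scope every query by a tenant identifier. The sketch below is a minimal Python illustration using SQLite; the table and tenant names are made up.

```python
# Minimal sketch of row-level tenant isolation in a multitenant SaaS backend.
# Table and tenant names are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("acme", 120.0), ("acme", 80.0), ("globex", 999.0)])

def invoices_for(tenant_id):
    """Every query is scoped by tenant_id, so tenants sharing the same
    infrastructure and code base never see each other's rows."""
    rows = db.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                      (tenant_id,))
    return [amount for (amount,) in rows]

print(invoices_for("acme"))    # [120.0, 80.0]
print(invoices_for("globex"))  # [999.0]
```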

Easy Customisation

Each user can easily customise applications to fit their business processes without affecting the common infrastructure. Because of the way SaaS is architected, these customisations are unique to each company or user and are always preserved through upgrades. That means SaaS providers can make upgrades more often, with less customer risk and much lower adoption cost.

Better Access

SaaS improves access to data from any networked device while making it easier to manage privileges, monitor data use, and ensure everyone sees the same information at the same time.

SaaS Harnesses the Consumer Web

Anyone familiar with Office 365 will be familiar with the Web interface of typical SaaS applications. With the SaaS model, you can customise with point-and-click ease, making the weeks or months it takes to update traditional business software seem hopelessly old-fashioned.

SaaS takes advantage of cloud computing infrastructure and economies of scale to provide customers a more streamlined approach to adopting, using and paying for software.


IT Simplified: Encryption

Encryption is a way of scrambling data so that only authorized parties can understand the information. In technical terms, it is the process of converting human-readable plaintext to incomprehensible text, also known as ciphertext. In simpler terms, encryption takes readable data and alters it so that it appears random. Encryption requires the use of a cryptographic key: a set of mathematical values that both the sender and the recipient of an encrypted message agree on.

Although encrypted data appears random, encryption proceeds in a logical, predictable way, allowing a party that receives the encrypted data and possesses the right key to decrypt the data, turning it back into plaintext. Truly secure encryption will use keys complex enough that a third party is highly unlikely to decrypt or break the ciphertext by brute force — in other words, by guessing the key.

Data can be encrypted “at rest,” when it is stored, or “in transit,” while it is being transmitted somewhere else.

What is a key in cryptography?

A cryptographic key is a string of characters used within an encryption algorithm for altering data so that it appears random. Like a physical key, it locks (encrypts) data so that only someone with the right key can unlock (decrypt) it.

What are the different types of encryption?

The two main kinds of encryption are symmetric encryption and asymmetric encryption. Asymmetric encryption is also known as public key encryption.

In symmetric encryption, there is only one key, and all communicating parties use the same (secret) key for both encryption and decryption. In asymmetric, or public key, encryption, there are two keys: one key is used for encryption, and a different key is used for decryption. The decryption key is kept private (hence the “private key” name), while the encryption key is shared publicly, for anyone to use (hence the “public key” name). Asymmetric encryption is a foundational technology for TLS (often called SSL).
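
The difference is easy to see in code. The sketch below uses Python’s widely used cryptography package: Fernet for symmetric encryption (one shared key) and RSA with OAEP padding for asymmetric encryption (public key encrypts, private key decrypts).

```python
# pip install cryptography
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric: the same secret key both encrypts and decrypts.
secret_key = Fernet.generate_key()
f = Fernet(secret_key)
token = f.encrypt(b"meet at noon")          # ciphertext appears random
assert f.decrypt(token) == b"meet at noon"  # same key recovers the plaintext

# Asymmetric: encrypt with the public key, decrypt with the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = public_key.encrypt(b"meet at noon", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"meet at noon"
```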

Why is data encryption necessary?

Privacy: Encryption ensures that no one can read communications or data at rest except the intended recipient or the rightful data owner. This prevents attackers, ad networks, Internet service providers, and in some cases governments from intercepting and reading sensitive data, protecting user privacy.

Security: Encryption helps prevent data breaches, whether the data is in transit or at rest. If a corporate device is lost or stolen and its hard drive is properly encrypted, the data on that device will still be secure. Similarly, encrypted communications enable the communicating parties to exchange sensitive data without leaking the data.

Data integrity: Encryption also helps prevent malicious behavior such as on-path attacks. When data is transmitted across the Internet, encryption ensures that what the recipient receives has not been viewed or tampered with on the way.

Regulations: For all these reasons, many industry and government regulations require companies that handle user data to keep that data encrypted. Examples of regulatory and compliance standards that require encryption include HIPAA, PCI-DSS, and the GDPR.


IT Simplified: Natural language processing

Natural language processing (NLP) is an area of computer science and artificial intelligence concerned with the interaction between computers and humans in natural language. The ultimate goal of NLP is to help computers understand language as well as we do. It is the driving force behind things like virtual assistants, speech recognition, sentiment analysis, automatic text summarization, machine translation and much more.

Data generated from conversations, declarations, or even tweets is an example of unstructured data. Unstructured data doesn’t fit neatly into the traditional row-and-column structure of relational databases, and it represents the vast majority of data available in the real world. Nowadays it is no longer about trying to interpret a text or speech based on its keywords (the old-fashioned mechanical way), but about understanding the meaning behind those words (the cognitive way).

It is a discipline that focuses on the interaction between data science and human language, and is scaling to lots of industries. Today NLP is booming thanks to the huge improvements in the access to data and the increase in computational power, which are allowing practitioners to achieve meaningful results in areas like healthcare, media, finance and human resources, among others.

Use Cases of NLP

In simple terms, NLP represents the automatic handling of natural human language like speech or text, and although the concept itself is fascinating, the real value behind this technology comes from the use cases.

NLP can help you with lots of tasks and the fields of application just seem to increase on a daily basis. Let’s mention some examples:

  • NLP enables the recognition and prediction of diseases based on electronic health records and patients’ own speech. This capability is being explored in health conditions that range from cardiovascular diseases to depression and even schizophrenia. For example, Amazon Comprehend Medical is a service that uses NLP to extract disease conditions, medications and treatment outcomes from patient notes, clinical trial reports and other electronic health records.
  • Organizations can determine what customers are saying about a service or product by identifying and extracting information from sources like social media. This sentiment analysis can provide a lot of information about customers’ choices and their decision drivers (see the sketch after this list).
  • Companies like Yahoo and Google filter and classify your emails with NLP by analyzing text in emails that flow through their servers and stopping spam before they even enter your inbox.
  • Amazon’s Alexa and Apple’s Siri are examples of intelligent voice-driven interfaces that use NLP to respond to vocal prompts and do everything from finding a particular shop or telling us the weather forecast to suggesting the best route to the office or turning on the lights at home.
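
As a taste of how little code a basic sentiment analysis takes, here is a sketch using NLTK’s VADER analyzer, one of several off-the-shelf options; the review strings are invented examples.

```python
# pip install nltk
import nltk
nltk.download("vader_lexicon")  # one-time download of the VADER lexicon
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
reviews = [
    "The product is great, shipping was fast!",
    "Terrible support, I want a refund.",
]
for text in reviews:
    scores = sia.polarity_scores(text)  # neg/neu/pos plus compound in [-1, 1]
    print(f"{scores['compound']:+.2f}  {text}")
```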

IT Simplified: Hyperconverged Infrastructure

Hyperconverged infrastructure (HCI) is a combination of servers and storage into a distributed infrastructure platform with intelligent software that creates flexible building blocks, replacing legacy infrastructure consisting of separate servers, storage networks, and storage arrays.


IT Simplified: Virtual Desktop Infrastructure

Virtual Desktop Infrastructure (VDI) is a technology that refers to the use of virtual machines to provide and manage virtual desktops. VDI hosts desktop environments on a centralized server and deploys them to end-users on request.
In VDI, a hypervisor segments servers into virtual machines that in turn host virtual desktops, which users access remotely from their devices. Users can access these virtual desktops from any device or location, and all processing is done on the host server. Users connect to their desktop instances through a connection broker, which is a software-based gateway that acts as an intermediary between the user and the server.

VDI can be either persistent or non-persistent. Each type offers different benefits:

With persistent VDI, a user connects to the same desktop each time, and users can personalize the desktop for their needs since changes are saved even after the connection is reset. In other words, desktops in a persistent VDI environment act like personal physical desktops.
Non-persistent VDI, where users connect to generic desktops and no changes are saved, is usually simpler and cheaper, since there is no need to maintain customized desktops between sessions. As a result, non-persistent VDI is often used in organizations with many task workers, or employees who perform a limited set of repetitive tasks and don’t need a customized desktop.

VDI offers a number of advantages, such as user mobility, ease of access, flexibility and greater security. In the past, its high-performance requirements made it costly and challenging to deploy on legacy systems, which posed a barrier for many businesses. However, the rise in enterprise adoption of hyperconverged infrastructure (HCI) offers a solution that provides scalability and high performance at a lower cost.

Benefits of VDI

Although VDI’s complexity means that it isn’t necessarily the right choice for every organization, it offers a number of benefits for organizations that do use it. Some of these benefits include:

Remote access: VDI users can connect to their virtual desktop from any location or device, making it easy for employees to access all their files and applications and work remotely from anywhere in the world.
Cost savings: Since processing is done on the server, the hardware requirements for end devices are much lower. Users can access their virtual desktops from older devices, thin clients, or even tablets, reducing the need for IT to purchase new and expensive hardware.
Security: In a VDI environment, data lives on the server rather than the end client device. This serves to protect data if an endpoint device is ever stolen or compromised.
Centralized management: VDI’s centralized format allows IT to easily patch, update or configure all the virtual desktops in a system.


IT Simplified: Cloud Security

Cloud computing security is a set of technologies and strategies that can help your organization protect cloud-based data, applications, and infrastructure, and comply with standards and regulations.

Identity management, privacy, and access control are especially important for cloud security because cloud systems are typically shared and Internet-facing resources. As more and more organizations use cloud computing and public cloud providers for their daily operations, they must prioritize appropriate security measures to address areas of vulnerability.

Security challenges in cloud computing:

Access Management

Often cloud user roles are configured very loosely, granting extensive privileges beyond what is intended or required. One common example is giving database delete or write permissions to untrained users or users who have no business need to delete or add database assets. At the application level, improperly configured keys and privileges expose sessions to security risks.
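
To illustrate, compare an overly broad grant with a least-privilege one. The example below uses AWS-style IAM policy documents written as Python dicts (the document structure follows AWS IAM; the account ID and table name are placeholders).

```python
# Overly broad: every DynamoDB action on every resource,
# including destructive ones like DeleteTable.
too_broad = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dynamodb:*",
        "Resource": "*",
    }],
}

# Least privilege: read-only actions, scoped to a single table.
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    }],
}
```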

Compliance Violations

As regulatory controls around the world become more stringent, organizations must adhere to numerous compliance standards, and by migrating to the cloud you may be in violation of your compliance obligations. Most regulations and compliance standards require businesses to know where data is located, who can access it, and how it is managed and processed, all of which can be challenging in a cloud environment. Other regulations require that cloud providers be certified for the relevant compliance standard.


IT Simplified: Network Firewall

A firewall is a network security device, either hardware- or software-based, that monitors all incoming and outgoing traffic and, based on a defined set of security rules, accepts, rejects, or drops specific traffic. A firewall establishes a barrier between secured internal networks and untrusted outside networks, such as the Internet.

History and Need for Firewall

Before firewalls, network security was performed by Access Control Lists (ACLs) residing on routers. ACLs are rules that determine whether network access should be granted or denied to a specific IP address. But ACLs cannot determine the nature of the packet they are blocking, and an ACL alone does not have the capacity to keep threats out of the network. Hence, the firewall was introduced.

How Firewall Works

A firewall matches network traffic against the rule set defined in its table. Once a rule is matched, the associated action is applied to the traffic. For example, one rule might state that no employee from the HR department can access data from the code server, while another allows the system administrator to access data from both the HR and technical departments. Rules can be defined on the firewall based on the needs and security policies of the organization. A minimal sketch of this first-match evaluation appears below.
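
The following Python sketch is purely illustrative: the subnets and rules are invented for the HR/code-server example above, and real firewalls match on many more packet fields (protocol, state, interfaces, and so on).

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str          # "accept", "reject", or "drop"
    src: str             # source network, e.g. the HR subnet
    dst: str             # destination network
    port: Optional[int]  # destination port; None matches any port

RULES = [
    Rule("drop",   "10.0.1.0/24", "10.0.9.10/32", None),  # HR -> code server: blocked
    Rule("accept", "10.0.0.5/32", "0.0.0.0/0",    None),  # sysadmin host: allowed anywhere
    Rule("accept", "0.0.0.0/0",   "0.0.0.0/0",    443),   # HTTPS allowed for everyone
]

def evaluate(src_ip: str, dst_ip: str, port: int) -> str:
    """Return the action of the first matching rule (first match wins)."""
    for rule in RULES:
        if (ip_address(src_ip) in ip_network(rule.src)
                and ip_address(dst_ip) in ip_network(rule.dst)
                and rule.port in (None, port)):
            return rule.action
    return "drop"  # default-deny if nothing matches

print(evaluate("10.0.1.7", "10.0.9.10", 22))  # drop: HR host to code server
print(evaluate("10.0.0.5", "10.0.9.10", 22))  # accept: system administrator
```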

From the perspective of a corporate business, network traffic can be either outgoing or incoming, and a firewall maintains a distinct set of rules for each case. Outgoing traffic, originating from the server itself, is mostly allowed to pass. Still, setting rules on outgoing traffic is always better in order to achieve more security and prevent unwanted communication.