
IT Simplified: Generative AI

What is Generative AI?

Generative artificial intelligence (AI) systems, such as ChatGPT, help create diverse content like audio, images, videos, code, and simulations. By training on existing data, generative AI identifies patterns and compiles them into a model. Despite lacking human-like thinking abilities, the data and processing power behind these models allow them to recognize and reproduce those patterns.
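
To make the idea concrete, here is a minimal, hypothetical Python sketch: a toy character-level model that learns which character tends to follow which in a small training text, then samples new text from those learned patterns. Real systems such as ChatGPT use large neural networks, but the train-on-data, sample-from-model loop is the same in spirit; the function names and corpus below are illustrative.

    import random
    from collections import defaultdict

    def train(text):
        # Record each observed pattern: which character follows which.
        model = defaultdict(list)
        for current, following in zip(text, text[1:]):
            model[current].append(following)
        return model

    def generate(model, seed, length=40):
        # Sample new text one character at a time from learned patterns.
        out = [seed]
        for _ in range(length):
            followers = model.get(out[-1])
            if not followers:  # no pattern learned for this character
                break
            out.append(random.choice(followers))
        return "".join(out)

    corpus = "generative ai learns patterns from data and generates new data"
    print(generate(train(corpus), "g"))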


IT Simplified: Stackable Switches

What are Stackable Switches?

In networking, the term “stackable switches” refers to a group of physical switches that have been cabled together and combined into a single logical switch. Over the years, stacking has evolved from a premium (and costly) feature into a core capability of many enterprise-grade switches, and of several SMB models as well.

This is the opposite of the modular-switch approach, where a single physical chassis offers several slots and modules for growing the switch, a design traditionally used in core switches. Both stackable and modular switches can provide a single management and control plane, or at least a single configurable logical switch, with some degree of redundancy if a stack member or module fails. Having a single logical switch with better reliability makes it easy to translate the logical network topology into a physical one.

What are Stacking Technologies?

A stack is usually built with cables that connect all the switches in a specific topology. Those cables attach to specific ports on the switches, depending on the type of stacking:

  1. Backplane stacking (BPS), which uses dedicated stacking modules (usually on the back of the switch) connected with vendor-specific cables.
  2. Front-plane stacking (FPS), such as VSF (Virtual Switching Framework), which uses standard Ethernet ports and standard Ethernet cables to build the stack.

The stacking topology also defines the resiliency of the stacked solution. There are typically several cabling options, depending on the switch vendor and model:

  1. Daisy-chain (bus) topologies are not used to build switch stacks because they do not provide the desired level of resiliency.
  2. Ring or redundant dual-ring topologies provide resiliency, but with more than two switches the packet paths may not be optimal.
  3. Mesh or full-mesh topologies provide higher resiliency as well as optimal packet paths.

To further increase the resiliency of stacked switches, there are solutions based on the concept of a “virtual chassis,” with separate management and control planes; high-end switch models typically implement these solutions.

Advantages of Stackable Switches:

  1. Management plane: A logical switch view with a single management interface makes management and operational tasks much easier. Link aggregation between ports of separate physical switches in the same stack increases bandwidth for downstream links, and the stack simplifies network design by treating multiple cables across switches as one logical link (see the configuration sketch after this list).
  2. Lower cost: Stackable switches offer a cost-effective alternative to modular switches while still delivering comparable scalability and improved flexibility. Resiliency and performance can differ (for better or worse) depending on the implementation.
  3. Flexibility: You can typically mix various port speeds and media types, as well as different switch models with varying capabilities; for example, you can combine PoE-capable switches with non-PoE models.
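
As an illustration of the single management plane and cross-member link aggregation mentioned above, here is a configuration sketch in Cisco IOS-style syntax. Stacking CLIs vary by vendor, and the interface numbers and channel-group ID below are assumptions, not a definitive recipe.

    ! On a stacked switch, interface names encode member/module/port,
    ! so one management session can configure ports on any physical member.
    interface GigabitEthernet1/0/10
     description access-port-on-member-1
    interface GigabitEthernet2/0/10
     description access-port-on-member-2
    !
    ! Aggregate one port from each stack member into a single logical link,
    ! so the downstream device keeps connectivity if one member fails.
    interface range GigabitEthernet1/0/1, GigabitEthernet2/0/1
     channel-group 1 mode active
    interface Port-channel1
     description uplink-spanning-two-stack-members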

Disadvantages of Stackable Switches:

  1. Performance: For SMB use cases, stack-port and cable speeds are enough to provide high bandwidth and low latency. But as port speeds increase or the stack grows, latency can rise and overall performance can drop.
  2. Stability: The stackable switch market is mature and relatively stable, but each vendor adds its own set of features and functionalities. Different vendors use different connectors, cables, and software for their stackable switches, so you generally must stay within the same product line to take advantage of stacking (though not necessarily the same model; in the Aruba 3810 Switch Series, for example, you can mix different models in the same stack).
  3. Resiliency: Depending on the stacking topology, certain faults can leave the overall stack in a degraded or non-operational state. Choose the most resilient topology you can, and harden each stack member, for example with dual power supplies for hardware redundancy. The single management or control plane may also reduce overall resiliency, but modular switches share the same problem.
  4. Manageability: The single management interface is great, but it has drawbacks. First, expanding an existing stack can cause an extended service disruption, for example when all switches must reboot to add a stack member, or after a power failure. Second, removing a switch from a stack can be tricky or require a complex process. Last but not least, upgrading the firmware on all stack members requires a complete reboot of all the switches.



IT Simplified: DMARC

What is DMARC?

Domain-based Message Authentication, Reporting & Conformance (DMARC) is an open email authentication protocol that provides domain-level protection of the email channel. DMARC authentication detects and prevents email spoofing techniques used in phishing, business email compromise (BEC) and other email-based attacks.
DMARC is the only widely deployed technology that can make the “from” domain in email headers trustworthy, and it builds on existing standards.
A domain owner can publish a DMARC record in DNS, specifying how receivers should handle unauthenticated email.
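
For example, a DMARC record is published as a DNS TXT record under the _dmarc label of the sending domain; the domain and report mailbox below are placeholders:

    _dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

Here p= sets the policy (none, quarantine, or reject) and rua= tells receivers where to send aggregate reports.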

To understand DMARC, it is also important to know two other mail authentication protocols: SPF and DKIM. With SPF (Sender Policy Framework), organizations authorize senders within an SPF record published in the Domain Name System (DNS).
The record contains approved sender IP addresses, including those authorized to send emails on behalf of the organization. Publishing and checking SPF records provide a reliable defense against email threats that falsify “from” addresses and domains.
DKIM (DomainKeys Identified Mail) is an email authentication protocol that enables receivers to verify whether an email was genuinely authorized by the owner of the sending domain. It allows an organization to take responsibility for transmitting a message by attaching a digital signature to it. Verification is done through cryptographic authentication using the signer’s public key published in the DNS. The signature ensures that parts of the email have not been modified since the time the digital signature was attached.
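
As an illustration, SPF and DKIM data are also published as DNS TXT records; the domain, IP range, selector, and truncated public key below are placeholders:

    example.com.                       IN  TXT  "v=spf1 ip4:203.0.113.0/24 include:_spf.example.net -all"
    selector1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...IDAQAB"

The -all qualifier tells receivers to fail mail from any sender not listed, and the p= value in the DKIM record is the public key receivers use to verify signatures.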

How does DMARC Work?

To pass DMARC authentication, a message must successfully undergo SPF and SPF alignment checks or DKIM and DKIM alignment checks. If a message fails DMARC, senders can instruct receivers on what to do with that message via a DMARC policy. There are three DMARC policies the domain owner can enforce: none (the message is delivered to the recipient and the DMARC report is sent to the domain owner), quarantine (the message is moved to a quarantine folder) and reject (the message is not delivered at all).

The DMARC policy of “none” is a good first step. This way, the domain owner can ensure that all legitimate email is authenticating properly. The domain owner receives DMARC reports to help them make sure that all legitimate email is identified and passes authentication. Once the domain owner is confident they have identified all legitimate senders and have fixed authentication issues, they can move to a policy of “reject” and block phishing, business email compromise, and other email fraud attacks. As an email receiver, an organization can ensure that its secure email gateway enforces the DMARC policy implemented by the domain owner.
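
A typical phased rollout can be expressed entirely in the published DMARC record, for example (the report address is a placeholder):

    Step 1, monitor only:    "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
    Step 2, partial action:  "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"
    Step 3, enforcement:     "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

The optional pct= tag applies the policy to only a percentage of failing messages, which softens the transition between steps.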

What is DMARC in Marketing Cloud?

DMARC can be used by email service providers and domain owners to set policies that limit how their domain can be used. One such policy restricts the domain’s use in “from” addresses, effectively prohibiting anyone from using the domain in the “from” field except through the provider’s webmail interface. Any email service provider or domain owner can publish this type of restrictive DMARC policy. Having powerful cloud services is also important, as they help protect employees against inbound email threats.

Points to note when implementing DMARC authentication:

  • Due to the volume of DMARC reports that an email sender can receive and the lack of clarity provided within DMARC reports, fully implementing DMARC authentication can be difficult.
  • DMARC parsing tools can help organizations make sense of the information included within DMARC reports.
  • Additional data and insights beyond what’s included within DMARC reports help organizations to identify email senders faster and more accurately. This helps speed up the process of implementing DMARC authentication and reduces the risk of blocking legitimate email.
  • Organizations can create a DMARC record in minutes and start gaining visibility through DMARC reports by enforcing a DMARC policy of “none.”
  • By properly identifying all legitimate email senders, including third-party email service providers, and fixing any authentication issues, organizations should reach a high confidence level before enforcing a DMARC policy of “reject”.



IT Simplified: Containers and their Benefits

What is a Container?

A container is a software solution that wraps your software process or microservice to make it executable in any computing environment. In general, you can store all kinds of executable files in containers: configuration files, software code, libraries, and binary programs.

By computing environments, we mean local systems, on-premises data centres, and cloud platforms managed by various service providers. Users can access them from anywhere.

However, application processes or microservices in cloud-based containers remain separate from the cloud infrastructure. Picture containers as lightweight virtual operating environments that wrap your application so it is compatible with any OS. Because the application is not bound to a particular cloud, operating system, or storage space, containerized software can execute in any environment.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries, and settings. All Google applications, like Gmail and Google Calendar, are containerized and run on Google’s cloud servers.

A typical container image, or application container, consists of:

  • The application code
  • Configuration files
  • Software dependencies
  • Libraries
  • Environment variables

Containerization ensures that none of these components depend on a specific OS kernel, so containers do not carry a guest OS the way a virtual machine must. Containerized applications are bundled with all their dependencies as a single deployable unit. By leveraging the features and capabilities of the host OS, containers enable these software apps to work in all environments.
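
As a concrete illustration, the Dockerfile sketch below packages the components listed above into a single image. The base image, file names, and environment variable are assumptions for a small Python application, not a definitive recipe.

    # Base layer: runtime, system tools, and system libraries.
    FROM python:3.12-slim
    WORKDIR /app
    # Software dependencies and libraries.
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Application code.
    COPY app.py .
    # Environment variable baked into the image.
    ENV APP_ENV=production
    # Command the container runs when it starts.
    CMD ["python", "app.py"]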

What Are the Benefits of a Container?

Container solutions are highly beneficial for businesses as well as software developers, for multiple reasons. After all, container technology has made it possible to develop, test, deploy, scale, rebuild, and destroy applications for various platforms or environments using the same method. Advantages of containerization include:

  • Containers require fewer system resources than virtual machines as they do not bind operating system images to each application they store.
  • They are highly interoperable as containerized apps can use the host OS.
  • Optimized resource usage as container computing lets similar apps share libraries and binary files.
  • No hardware-level or implementation worries since containers are infrastructure-independent.
  • Better portability, because you can migrate and deploy containers anywhere smoothly (see the commands after this list).
  • Easy scaling and development because containerization technology allows gradual expansion and parallel testing of apps.
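
Assuming the Dockerfile sketch above, the same image can be built once and run identically on a laptop, an on-premises server, or a cloud host; the image name and port are placeholders:

    docker build -t myapp:1.0 .
    docker run --rm -p 8080:8080 myapp:1.0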

IT Simplified: Natural Language Processing

Natural language processing (NLP) is an area of computer science and artificial intelligence concerned with the interaction between computers and humans in natural language. The ultimate goal of NLP is to help computers understand language as well as we do. It is the driving force behind things like virtual assistants, speech recognition, sentiment analysis, automatic text summarization, machine translation and much more.

Data generated from conversations, declarations, or even tweets are examples of unstructured data. Unstructured data doesn’t fit neatly into the traditional row-and-column structure of relational databases and represents the vast majority of data available in the real world. Nowadays it is no longer about trying to interpret a text or speech based on its keywords (the old-fashioned mechanical way), but about understanding the meaning behind those words (the cognitive way).

It is a discipline at the intersection of data science and human language, and it is scaling to many industries. Today NLP is booming thanks to huge improvements in access to data and increases in computational power, which allow practitioners to achieve meaningful results in areas like healthcare, media, finance, and human resources, among others.

Use Cases of NLP

In simple terms, NLP represents the automatic handling of natural human language like speech or text, and although the concept itself is fascinating, the real value behind this technology comes from the use cases.

NLP can help you with lots of tasks and the fields of application just seem to increase on a daily basis. Let’s mention some examples:

  • NLP enables the recognition and prediction of diseases based on electronic health records and patients’ own speech. This capability is being explored for health conditions ranging from cardiovascular disease to depression and even schizophrenia. For example, Amazon Comprehend Medical is a service that uses NLP to extract disease conditions, medications, and treatment outcomes from patient notes, clinical trial reports, and other electronic health records.
  • Organizations can determine what customers are saying about a service or product by identifying and extracting information from sources like social media. This sentiment analysis can reveal a lot about customers’ choices and their decision drivers (see the sketch after this list).
  • Companies like Yahoo and Google filter and classify your emails with NLP, analyzing the text of messages that flow through their servers and stopping spam before it even enters your inbox.
  • Amazon’s Alexa and Apple’s Siri are examples of intelligent voice-driven interfaces that use NLP to respond to vocal prompts and do things like find a particular shop, tell us the weather forecast, suggest the best route to the office, or turn on the lights at home.
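
As referenced in the sentiment-analysis item above, here is a minimal, hypothetical Python sketch of keyword-based sentiment scoring, the “old-fashioned mechanical way”; modern NLP models instead learn meaning from context. The word lists below are illustrative.

    # Toy keyword-based sentiment scorer (mechanical, not cognitive).
    POSITIVE = {"great", "love", "excellent", "good"}
    NEGATIVE = {"bad", "terrible", "hate", "poor"}

    def sentiment(text: str) -> str:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    print(sentiment("I love this product and the support is great"))  # positive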

IT Simplified: Hyperconverged Infrastructure

Hyperconverged infrastructure (HCI) combines servers and storage into a distributed infrastructure platform with intelligent software, creating flexible building blocks that replace legacy infrastructure made up of separate servers, storage networks, and storage arrays. HCI represents a paradigm shift in data center technologies.


IT Simplified: Virtual Desktop Infrastructure

Virtual Desktop Infrastructure (VDI) is a technology that refers to the use of virtual machines to provide and manage virtual desktops. VDI hosts desktop environments on a centralized server and deploys them to end-users on request.
In VDI, a hypervisor segments servers into virtual machines that in turn host virtual desktops, which users access remotely from their devices. Users can access these virtual desktops from any device or location, and all processing is done on the host server. Users connect to their desktop instances through a connection broker, which is a software-based gateway that acts as an intermediary between the user and the server.

VDI can be either persistent or non-persistent. Each type offers different benefits:

With persistent VDI, a user connects to the same desktop each time and can personalize it, since changes are saved even after the connection is reset. In other words, desktops in a persistent VDI environment act like personal physical desktops.
In non-persistent VDI, users connect to generic desktops and no changes are saved. It is usually simpler and cheaper, since there is no need to maintain customized desktops between sessions. As a result, non-persistent VDI is often used in organizations with many task workers, employees who perform a limited set of repetitive tasks and don’t need a customized desktop.

VDI offers a number of advantages, such as user mobility, ease of access, flexibility and greater security. In the past, its high-performance requirements made it costly and challenging to deploy on legacy systems, which posed a barrier for many businesses. However, the rise in enterprise adoption of hyperconverged infrastructure (HCI) offers a solution that provides scalability and high performance at a lower cost.

Benefits of VDI

Although VDI’s complexity means that it isn’t necessarily the right choice for every organization, it offers a number of benefits for organizations that do use it. Some of these benefits include:

Remote access: VDI users can connect to their virtual desktop from any location or device, making it easy for employees to access all their files and applications and work remotely from anywhere in the world.
Cost savings: Since processing is done on the server, the hardware requirements for end devices are much lower. Users can access their virtual desktops from older devices, thin clients, or even tablets, reducing the need for IT to purchase new and expensive hardware.
Security: In a VDI environment, data lives on the server rather than the end client device. This serves to protect data if an endpoint device is ever stolen or compromised.
Centralized management: VDI’s centralized format allows IT to easily patch, update or configure all the virtual desktops in a system.


IT Simplified: Mixed Reality

Mixed reality is the next wave in computing, following mainframes, PCs, and smartphones, according to Microsoft. It liberates us from screen-bound experiences by offering instinctual interactions with data in our living spaces and with our friends. Hundreds of millions of online explorers around the world have experienced mixed reality through their handheld devices. Mobile AR offers the most mainstream mixed reality solutions on social media today. People may not even realize that the AR filters they use on Instagram are mixed reality experiences.


Dell’s New Virtualisation Suite Targets SMBs

Dell has announced the launch of new appliances, thin clients, and software solutions for its Desktop Virtualisation Suite.


Dell has launched a wide variety of products in the Desktop Virtualisation Suite. These new products are aimed at helping customers deploy, configure, and manage Virtual Desktop Infrastructure (VDI) as they move along the digital transformation path.


How Will Artificial Intelligence Transform the Health Industry?

In today’s post-Electronic Health Record (EHR) health environment, the amount of data generated by digitization is staggering. Dozens of systems feed data across healthcare organizations daily, and IDC predicts that health data volumes will continue to grow at a rate of 48% annually. Yet, despite advances toward becoming a data-rich and data-driven industry, medical errors are still the third-leading cause of death in the US.
