
IT Simplified: AIOps 

Artificial intelligence for IT operations (AIOps) is an umbrella term for the use of big data analytics, machine learning (ML) and other AI technologies to automate the identification and resolution of common IT issues. The systems, services and applications in a large enterprise — especially with the advances in distributed architectures such as containers, microservices and multi-cloud environments — produce immense volumes of log and performance data that can impede an IT team’s ability to identify and resolve incidents. AIOps uses this data to monitor assets and gain visibility into dependencies within and outside of IT systems.
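For illustration, here is a minimal sketch of the kind of statistical baselining an AIOps pipeline might apply to log data, written in Python with made-up error counts (real products use far richer ML models than this):

    from statistics import mean, stdev

    # Hypothetical error counts per 5-minute window, learned from historical logs.
    baseline = [12, 9, 14, 11, 10, 13, 12, 11]
    mu, sigma = mean(baseline), stdev(baseline)

    # Flag new windows whose error counts deviate sharply from the baseline.
    for window, count in [("10:00", 12), ("10:05", 95)]:
        if abs(count - mu) > 3 * sigma:
            print(f"{window}: {count} errors deviates from baseline ({mu:.1f} +/- {sigma:.1f})")

In a real deployment this kind of anomaly detection runs continuously across metrics, logs and traces, and feeds event correlation and automated remediation rather than a simple print statement.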


IT Simplified: Generative AI

What is Generative AI?

Generative artificial intelligence (AI) algorithms, such as ChatGPT, help create diverse content like audio, images, videos, code and simulations. By training on existing data, generative AI identifies patterns and compiles them into a model. Despite lacking human-like thinking abilities, the data and processing power behind these models allow them to recognize and reproduce such patterns.


IT Simplified: Stackable Switches

What are Stackable Switches?

In networking, the term “stackable switches” refers to a group of physical switches that have been cabled and grouped into a single logical switch. Over the years, stacking has evolved from a premium (and costly) feature into a core capability of many enterprise-grade switches (and of several SMB models as well).

It is the opposite of the modular-switch approach, where you have a single physical chassis with several slots and modules to grow your switch, typically used, at least in the past, for core switches. Both stackable and modular switches can provide a single management and control plane, or at least a single configurable logical switch, with some kind of redundancy if you lose a stack member or a module. Having a single logical switch with better reliability makes it easy to translate the logical network topology into the physical topology.

What are Stacking Technologies?

In stackable switches, we usually build the stack with cables that connect all the switches in a specific topology. We connect those cables to specific ports of the switches, depending on the type of stacking.

  1. Backplane stacking (BPS), where specific stacking modules (usually on the back of the switch) are connected with specific cables (both depending on the vendor).
  2. Front-plane stacking (FPS), such as VSF, where standard Ethernet ports are used to build the stack with standard Ethernet cables.

The stacking topology also defines the resiliency of the stacked solution. There are typically several cabling options (depending on the switch vendor and models):

  1. Daisy chain or bus topologies are generally not used to build switch stacks because they do not provide the desired level of resiliency.
  2. Ring or redundant dual-ring topologies provide resiliency, but with more than two switches the packet paths may not be optimal.
  3. Mesh or full-mesh topologies provide higher resiliency as well as optimal packet paths.

To increase the resiliency of stacked switches, there are also solutions based on the concept of a “virtual chassis” with separated management and control planes, typically implemented in high-end switch models. These come in the same two flavors: backplane stacking (BPS), which uses vendor-specific stacking modules (usually on the back of the switch) and cables, and front-plane stacking (FPS), such as VSF (Virtual Switching Framework), which builds the stack over standard Ethernet ports and cables.

Advantages of Stackable Switches:

  1. Management Plane: A logical switch view with a single management interface makes management and operational tasks very easy. By enabling link aggregation between ports of separate physical switches in the same stack, a stack also enhances bandwidth for downstream links: multiple cables across switches are treated as one logical link (see the sketch after this list).
  2. Less Expensive: Stackable switches offer a cost-effective alternative to modular switches while still delivering comparable scalability and improved flexibility. Resiliency and performance can differ (for better or worse) depending on the implementation.
  3. Flexibility: You can typically mix various port speeds and media types, as well as different switch models with varying capabilities. For example, you can combine switches that have PoE functions with other models.
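As a rough illustration of that cross-stack link aggregation, on a Cisco Catalyst stack an LACP port channel spanning two stack members might look like the following (interface and channel numbers are examples only):

    ! Gi1/0/1 lives on stack member 1, Gi2/0/1 on stack member 2
    interface range GigabitEthernet1/0/1, GigabitEthernet2/0/1
     channel-group 1 mode active
    !
    interface Port-channel1
     switchport mode trunk

Because the two physical links terminate on different stack members, the downstream device keeps its uplink even if one member fails.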

Disadvantages of Stackable Switches:

  1. Performance: For SMB use cases, stack port and cable speeds are enough to provide high bandwidth and low latency. But as link speeds increase or the stack grows, latency may rise and overall performance may drop.
  2. Stability: The stackable switch market is very mature and relatively stable. However, each vendor adds its own set of features and functionalities, and different vendors use different types of connectors, cables and software for their stackable switches. As a result, you generally must use the same product line of switches to take advantage of stacking (not necessarily the same model; in the Aruba 3810 Switch Series, for example, you can mix different models in the same stack).
  3. Resiliency: Depending on the stacking topology, some faults can leave the overall stack no longer operating correctly. Be sure to choose the best topology and to ensure higher resiliency on each stack member, for example by using dual power supplies for hardware redundancy. The single management or control plane may also reduce overall resiliency, but modular switches share a similar problem.
  4. Manageability: The single management interface is great, but it has some drawbacks. First, expanding an existing stack can cause an extended service disruption, for example when all the switches reboot to add a stack member or during a power failure. Second, removing a switch from a stack can be tricky or require a complex process. Last but not least, upgrading the firmware on all the stack members requires a complete reboot of all the switches.



IT Simplified: DMARC

What is DMARC?

Domain-based Message Authentication, Reporting & Conformance (DMARC) is an open email authentication protocol that provides domain-level protection of the email channel. DMARC authentication detects and prevents email spoofing techniques used in phishing, business email compromise (BEC) and other email-based attacks.
DMARC, which builds on existing standards, is the only widely deployed technology that makes the “from” domain in email headers trustworthy.
The domain owner can establish a DMARC record in the DNS servers, specifying actions for unauthenticated emails.

To understand DMARC, it is also important to know two other mail authentication protocols: SPF and DKIM. With SPF (Sender Policy Framework), organizations can authorize senders within an SPF record published in the Domain Name System (DNS).
The record contains approved sender IP addresses, including those authorized to send emails on behalf of the organization. Publishing and checking SPF records provide a reliable defense against email threats that falsify “from” addresses and domains.
DKIM (DomainKeys Identified Mail) is an email authentication protocol that enables receivers to verify whether an email was genuinely authorized by the owner of its domain. It allows an organization to take responsibility for transmitting a message by attaching a digital signature to it. Verification is done through cryptographic authentication using the signer’s public key published in the DNS. The signature ensures that parts of the email have not been modified since the time the digital signature was attached.
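For illustration, both SPF and DKIM are published as DNS TXT records. The entries below are hypothetical examples for a domain called example.com; the IP range, selector name and key are placeholders:

    example.com.                      IN TXT "v=spf1 ip4:192.0.2.0/24 include:_spf.example.com ~all"
    selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-encoded public key>"

The SPF record lists the senders authorized for the domain, while the DKIM record publishes the public key receivers use to verify message signatures.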

How does DMARC Work?

To pass DMARC authentication, a message must pass SPF and SPF alignment checks, or DKIM and DKIM alignment checks. If a message fails DMARC, senders can instruct receivers on what to do with it via a DMARC policy. There are three DMARC policies the domain owner can enforce: none (the message is delivered to the recipient and the DMARC report is sent to the domain owner), quarantine (the message is moved to a quarantine folder) and reject (the message is not delivered at all).

The DMARC policy of “none” is a good first step. This way, the domain owner can ensure that all legitimate email is authenticating properly. The domain owner receives DMARC reports to help them make sure that all legitimate email is identified and passes authentication. Once the domain owner is confident they have identified all legitimate senders and have fixed authentication issues, they can move to a policy of “reject” and block phishing, business email compromise, and other email fraud attacks. As an email receiver, an organization can ensure that its secure email gateway enforces the DMARC policy implemented by the domain owner.
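For illustration, the DMARC policy itself is a DNS TXT record published at _dmarc.<domain>. The hypothetical records below for example.com show the rollout described above, from monitoring to full enforcement (only one record is published at a time):

    _dmarc.example.com. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"       ; step 1: monitor only
    _dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com" ; step 2: quarantine failures
    _dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"     ; step 3: block failures

The rua tag tells receivers where to send aggregate reports, which is how the domain owner gains the visibility described above.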

What is DMARC in Marketing Cloud?

DMARC can be used by email service providers and domain owners to set policies that limit the usage of their domain. One such policy is restricting the domain’s usage in “from” addresses, which effectively prohibits anyone from using the domain in the “from” field except when using the provider’s webmail interface. Any email service provider or domain owner can publish this type of restrictive DMARC policy. Having powerful cloud services behind your email channel is also important, as they will help protect employees against inbound email threats.

Points to note while authenticating DMARC:

  • Due to the volume of DMARC reports that an email sender can receive and the lack of clarity provided within DMARC reports, fully implementing DMARC authentication can be difficult.
  • DMARC parsing tools can help organizations make sense of the information included within DMARC reports (a minimal parsing sketch follows this list).
  • Additional data and insights beyond what’s included within DMARC reports help organizations to identify email senders faster and more accurately. This helps speed up the process of implementing DMARC authentication and reduces the risk of blocking legitimate email.
  • Organizations can create a DMARC record in minutes and start gaining visibility through DMARC reports by enforcing a DMARC policy of “none.”
  • By properly identifying all legitimate email senders (including third-party email service providers) and fixing any authentication issues, organizations should reach a high confidence level before enforcing a DMARC policy of “reject”.
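As a minimal sketch of what such parsing involves, the Python below summarizes a DMARC aggregate (rua) report; the file name is a placeholder and the element names follow the common aggregate-report XML schema:

    import xml.etree.ElementTree as ET

    root = ET.parse("report.xml").getroot()  # placeholder file name
    for record in root.iter("record"):
        row = record.find("row")
        print(
            row.findtext("source_ip"),                     # who sent on your behalf
            row.findtext("count"),                         # how many messages
            row.findtext("policy_evaluated/disposition"),  # none / quarantine / reject
            row.findtext("policy_evaluated/dkim"),         # DKIM result
            row.findtext("policy_evaluated/spf"),          # SPF result
        )

Real parsing tools add aggregation, sender identification and trending on top of this raw data.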



IT Simplified : Containers and their Benefits

What is a Container?

A container is a software solution that wraps your software process or microservice to make it executable in all computing environments. In general, you can store all kinds of executable files in containers: for example, configuration files, software code, libraries, and binary programs.

By computing environments, we mean local systems, on-premises data centres, and cloud platforms managed by various service providers. Users can access them from anywhere.

However, application processes or microservices in cloud-based containers remain separate from cloud infrastructure. Picture containers as Virtual Operating Systems that wrap your application so that it is compatible with any OS. As the application is not bound to a particular cloud, operating system, or storage space, containerized software can execute in any environment.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. All Google applications, like Gmail and Google Calendar, are containerized and run on Google’s cloud servers.

A typical container image, or application container, consists of:

  • The application code
  • Configuration files
  • Software dependencies
  • Libraries
  • Environment variables

Containerization ensures that none of these components depends on its own OS kernel, so containers do not carry any guest OS with them the way a virtual machine must. Containerized applications are tied to all their dependencies as a single deployable unit. Leveraging the features and capabilities of the host OS, containers enable these software apps to work in all environments.
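To make this concrete, here is a minimal, hypothetical Dockerfile that packages those pieces into a container image; the base image, file names and variable are examples only:

    # base image supplies the runtime, system tools and system libraries
    FROM python:3.12-slim
    WORKDIR /app
    # software dependencies and libraries
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    # application code and configuration files
    COPY . .
    # environment variables
    ENV APP_ENV=production
    CMD ["python", "app.py"]

Building this file (docker build) produces an image that runs identically on a laptop, an on-premises server, or any cloud, because everything except the host kernel travels with it.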

What Are the Benefits of A Container?

Container solutions are highly beneficial for businesses as well as software developers for multiple reasons. After all, container technology has made it possible to develop, test, deploy, scale, re-build, and destroy applications for various platforms or environments using the same method. Advantages of containerization include:

  • Containers require fewer system resources than virtual machines as they do not bind operating system images to each application they store.
  • They are highly interoperable as containerized apps can use the host OS.
  • Optimized resource usage as container computing lets similar apps share libraries and binary files.
  • No hardware-level or implementation worries since containers are infrastructure-independent.
  • Better portability because you can migrate and deploy containers anywhere smoothly.
  • Easy scaling and development because containerization technology allows gradual expansion and parallel testing of apps.

IT Simplified: Software Defined Data Center (SDDC)

What is an SDDC?

A traditional data center is a facility where organizational data, applications, networks, and infrastructure are centrally housed and accessed. It is the hub for IT operations and physical infrastructure equipment, including servers, storage devices, network equipment, and security devices.

In contrast, a software-defined data center (SDDC) is an IT-as-a-Service (ITaaS) platform that services an organization’s software, infrastructure, or platform needs. An SDDC can be housed on-premises, at an MSP, or in private, public, or hosted clouds. (For our purposes, we will discuss the benefits of hosting an SDDC in the cloud.) Like traditional data centers, SDDCs also host servers, storage devices, network equipment, and security devices.

Here’s where the differences come in.

Unlike traditional data centers, an SDDC uses a virtualized environment to deliver a programmatic approach to the functions of a traditional data center. SDDCs rely heavily on virtualization technologies to abstract, pool, manage, and deploy data center functions. Like server virtualization concepts used for years, SDDCs abstract, pool, and virtualize all data center services and resources in order to:

  1. Reduce IT resource usage
  2. Provide automated deployment and management
  3. Increase flexibility
  4. Improve business agility

Key SDDC Architectural Components include:

  • Compute virtualization, where virtual machines (VMs)—including their operating systems, CPUs, memory, and software—reside on cloud servers. Compute virtualization allows users to create software implementations of computers that can be spun up or spun down as needed, decreasing provisioning time.
  • Network virtualization, where the network infrastructure servicing your VMs can be provisioned without worrying about the underlying hardware. Network infrastructure needs—telecommunications, firewalls, subnets, routing, administration, DNS, etc.—are configured inside your cloud SDDC on the vendor’s abstracted hardware. No network hardware assembly is required.
  • Storage virtualization, where disk storage is provisioned from the SDDC vendor’s storage pool. You get to choose your storage types, based on your needs and costs. You can quickly add storage to a VM when needed.
  • Management and automation software. SDDCs use management and automation software to keep business-critical functions working around the clock, reducing the need for IT manpower. Remote management and automation are delivered via a software platform accessible from any suitable location, via APIs or Web browser access (a hypothetical API sketch follows this list).
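As a hypothetical sketch of that programmatic approach (the endpoint, token and payload fields below are invented for illustration, not a real vendor API), provisioning a VM in an SDDC can be a single API call:

    import requests  # third-party HTTP library (pip install requests)

    API = "https://sddc.example.com/api/v1"        # hypothetical SDDC endpoint
    HEADERS = {"Authorization": "Bearer <api-token>"}

    # Describe the desired VM as data; the platform turns it into infrastructure.
    vm_spec = {"name": "web-01", "cpus": 2, "memory_gb": 8, "network": "prod-subnet"}
    resp = requests.post(f"{API}/vms", json=vm_spec, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    print("Provisioned:", resp.json())

The point is the shape of the interaction: infrastructure is requested declaratively through software rather than assembled by hand.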

What is the difference between SDDC and cloud?

A software-defined data center differs from a private cloud, since a private cloud only has to offer virtual-machine self-service, beneath which it could use traditional provisioning and management. Instead, SDDC concepts imagine a data center that can encompass private, public, and hybrid clouds.



IT Simplified: Natural language processing

Natural language processing (NLP) is an area of computer science and artificial intelligence concerned with the interaction between computers and humans in natural language. The ultimate goal of NLP is to help computers understand language as well as we do. It is the driving force behind things like virtual assistants, speech recognition, sentiment analysis, automatic text summarization, machine translation and much more.

Data generated from conversations, declarations, or even tweets are examples of unstructured data. Unstructured data doesn’t fit neatly into the traditional row-and-column structure of relational databases and represents the vast majority of data available in the actual world. Nowadays it is no longer about trying to interpret a text or speech based on its keywords (the old-fashioned mechanical way), but about understanding the meaning behind those words (the cognitive way).

It is a discipline that focuses on the interaction between data science and human language, and is scaling to lots of industries. Today NLP is booming thanks to the huge improvements in the access to data and the increase in computational power, which are allowing practitioners to achieve meaningful results in areas like healthcare, media, finance and human resources, among others.

Use Cases of NLP

In simple terms, NLP represents the automatic handling of natural human language like speech or text, and although the concept itself is fascinating, the real value behind this technology comes from the use cases.

NLP can help you with lots of tasks and the fields of application just seem to increase on a daily basis. Let’s mention some examples:

  • NLP enables the recognition and prediction of diseases based on electronic health records and patient’s own speech. This capability is being explored in health conditions that go from cardiovascular diseases to depression and even schizophrenia. For example, Amazon Comprehend Medical is a service that uses NLP to extract disease conditions, medications and treatment outcomes from patient notes, clinical trial reports and other electronic health records.
  • Organizations can determine what customers are saying about a service or product by identifying and extracting information from sources like social media. This sentiment analysis can provide a lot of information about customers’ choices and their decision drivers (a minimal sketch follows this list).
  • Companies like Yahoo and Google filter and classify your emails with NLP by analyzing text in emails that flow through their servers and stopping spam before they even enter your inbox.
  • Amazon’s Alexa and Apple’s Siri are examples of intelligent voice driven interfaces that use NLP to respond to vocal prompts and do everything like find a particular shop, tell us the weather forecast, suggest the best route to the office or turn on the lights at home.
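As a minimal sketch of the sentiment analysis mentioned above, assuming the third-party NLTK library is installed (pip install nltk), its bundled VADER analyzer scores short texts out of the box:

    import nltk
    nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
    from nltk.sentiment import SentimentIntensityAnalyzer

    sia = SentimentIntensityAnalyzer()
    for post in ["Great product, support was quick!",
                 "Terrible update, the app keeps crashing."]:
        scores = sia.polarity_scores(post)  # neg/neu/pos plus compound in [-1, 1]
        print(post, "->", scores["compound"])

Production systems typically use larger trained models, but the workflow (text in, sentiment scores out) is the same.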

IT Simplified: Business Continuity and Disaster Recovery

A business continuity and disaster recovery plan is a broad guide designed to keep a business running, even in the event of a disaster. This plan focuses on the business as a whole, but drills down to specific scenarios that might create operational risks. With business continuity planning, the aim is to keep critical operations functioning, so that your business can continue to conduct regular business activities even under unusual circumstances.

When followed correctly, a business continuity plan should be able to continue to provide services to internal and external stakeholders, with minimal disruption, either during or immediately after a disaster. A comprehensive plan should also address the needs of business partners and vendors.

A disaster or data recovery plan is a more focused, specific part of the wider business continuity plan. The scope of a disaster recovery plan is sometimes narrowed to focus on the data and information systems of a business. In the simplest of terms, a disaster recovery plan is designed to save data with the sole purpose of being able to recover it quickly in the event of a disaster. With this aim in mind, disaster recovery plans are usually developed to address the specific requirements of the IT department to get back up and running—which ultimately affects the business as a whole.


IT Simplified: Remote Display Technologies

Remote access technology refers to any IT toolset used to connect to, access, and control devices, resources, and data stored on a local network from a remote geographic location. 

This makes remote access crucial for businesses of all sizes which have not moved to a cloud-first model, or which require access to on-premises machines or resources. Three of the most common remote access technologies – Remote Desktop Services, Remote Access Software, and Virtual Private Networks – are examined in brief.


Open Source Software

Open source software is code that is designed to be publicly accessible—anyone can see, modify, and distribute the code as they see fit.

Open source software is developed in a decentralized and collaborative way, relying on peer review and community production.

Open source software is often cheaper, more flexible, and has more longevity than its proprietary peers because it is developed by communities rather than a single author or company.