Monday, August 8, 2016

HYPERSCALE COMPUTING

Hyperscale computing is a distributed computing environment in which the volume of data and the demand for certain types of workloads can increase exponentially yet still be accommodated quickly in a cost-effective manner.

Hyperscale data centers, which are often built with stripped-down commercial off-the-shelf (COTS) computing equipment, can have millions of virtual servers and accommodate increased computing demands without requiring additional physical space, cooling or electrical power. The savings in hardware can pay for custom software to meet business needs. In such a scenario, the total cost of ownership (TCO) is typically measured in terms of high availability (HA) and the unit price for delivering an application and/or data.
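
To make the unit-price idea concrete, here is a minimal, purely illustrative Python calculation. Every figure (server count, hardware cost, power budget, virtual-server density) is hypothetical and exists only to show how a per-virtual-server delivery cost might be derived.

# Illustrative only: hypothetical figures for a per-unit delivery cost.
# None of these numbers describe a real data center.
servers = 10_000                      # physical COTS servers
hardware_cost_per_server = 3_000      # USD, amortized over 3 years
annual_power_and_cooling = 4_000_000  # USD per year for the facility
vms_per_server = 40                   # virtual servers packed onto each host

annual_hardware = servers * hardware_cost_per_server / 3
total_annual_cost = annual_hardware + annual_power_and_cooling
total_vms = servers * vms_per_server

cost_per_vm_per_year = total_annual_cost / total_vms
print(f"Approximate annual cost per virtual server: ${cost_per_vm_per_year:.2f}")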

Hyperscale computing is often associated with cloud computing and the very large data centers owned by Facebook, Google and Amazon. There is a lot of interest in hyperscale computing right now because the open source software that such organizations have developed to run their data centers is expected to trickle down to smaller organizations, helping them become more efficient, use less power and respond quickly to their own users' needs.

Thursday, July 7, 2016

DEEP WEB

The deep Web, sometimes called the invisible Web, is the large part of the Internet that is inaccessible to conventional search engines. Deep Web content includes email messages, chat messages, private content on social media sites, electronic bank statements, electronic health records (EHRs) and other content that is accessible over the Internet but is not crawled and indexed by search engines like Google, Yahoo, Bing or DuckDuckGo.

It is not known how large the deep Web is, but many experts estimate that search engines crawl and index less than 1% of all the content that can be accessed over the Internet. That part of the Internet which is crawled and indexed by search engines is sometimes referred to as the surface Web.

The reasons for not indexing deep Web content are varied. It may be that the content is proprietary, in which case it can only be accessed by approved visitors coming in through a virtual private network (VPN). Or the content may be commercial, in which case it resides behind a member wall and can only be accessed by customers who have paid a fee. Or perhaps the content contains personally identifiable information (PII), in which case it is protected by compliance regulations and can only be accessed through a portal site by individuals who have been granted access privileges. When mashups are generated on the fly and their components lack a permanent uniform resource locator (URL), they also become part of the deep Web.
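
As a rough illustration of why gated content never reaches an index, here is a minimal Python sketch of crawler logic that simply skips anything it cannot fetch anonymously. The URLs are placeholders and real crawlers are far more involved; this is only meant to show the mechanism.

# Illustrative crawler logic: content that demands authentication (or has no
# stable URL to request at all) is never fetched, so it is never indexed.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def try_to_index(url):
    try:
        with urlopen(url, timeout=10) as response:
            html = response.read()
            print(f"Fetched {len(html)} bytes from {url}; eligible for indexing.")
    except HTTPError as err:
        # 401/403 responses are typical for paywalled or login-protected pages.
        print(f"Skipping {url}: server answered {err.code}; it stays in the deep Web.")
    except URLError as err:
        print(f"Skipping {url}: could not reach it ({err.reason}).")

try_to_index("https://example.com/")            # public page, indexable
try_to_index("https://example.com/statements")  # placeholder for a gated page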

The term "deep Web" was coined by BrightPlanet in a 2001 white paper entitled 'The Deep Web: Surfacing Hidden Value' and is often confused in the media with the term "dark Web." Like deep Web content, dark Web content cannot be accessed by conventional search engines, but most often the reason dark Web content remains unaccessible to search engines is because the content is illegal.

Thursday, June 2, 2016

SECURITY BY DESIGN

Security by design is an approach to software and hardware development that seeks to make systems as free of vulnerabilities and impervious to attack as possible through such measures as continuous testing, authentication safeguards and adherence to best programming practices.

An emphasis on building security into products counters the all-too-common tendency for security to be an afterthought in development. Addressing existing vulnerabilities and patching security holes as they are found can be a hit-and-miss process and will never be as effective as designing systems to be as secure as possible from the start.
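
As one small example of the "best programming practices" the definition mentions, a system designed to treat all input as untrusted from the start would use parameterized queries rather than string concatenation. The sketch below uses Python's standard sqlite3 module; the users table and data are made up purely for illustration.

# Illustrative sketch: parameterized queries as a security-by-design habit.
# The table and its contents are hypothetical; the point is the query style.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_supplied = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern: building SQL by string concatenation.
# query = "SELECT email FROM users WHERE name = '" + user_supplied + "'"

# Secure-by-design pattern: the driver binds the value as data, not code.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_supplied,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing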

Security by design is becoming crucial in the rapidly developing Internet of Things (IoT) environment, in which almost any conceivable device, object or entity can be given a unique identifier (UID) and networked, making it addressable over the Internet. One of the major challenges of IoT security is that security has not traditionally been considered in the product design of appliances and objects that were never meant to be networked.

The security by design model contrasts with less rigorous approaches including security through obscurity, security through minority and security through obsolescence.

Monday, May 9, 2016

CLOUD BACKUP

Cloud backup is a strategy for copying data to a server at a remote data center so that it will be preserved in case of equipment failure or other catastrophe. The off-site server may be hosted in a proprietary, private cloud, or it may be hosted by a public cloud service provider who charges the customer a fee for storing and maintaining the backup. Cloud backup is also known as online backup.

Online backup systems in a public cloud typically run on a schedule that is determined by the level of service the customer has purchased. If the customer has contracted for daily backups, for instance, then the application collects, compresses, encrypts and transfers data to the service provider's servers every 24 hours. To reduce the amount of bandwidth consumed and the time it takes to transfer files, the service provider might only provide incremental backups after the initial full backup.
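
Here is a minimal Python sketch of the incremental idea described above: after an initial full copy, only files modified since the last run are collected and compressed. The paths are placeholders and the encryption and transfer steps are deliberately stubbed out; a real backup client would handle both before handing data to the provider.

# Illustrative incremental-backup sketch: gather only files changed since the
# last run, compress them, and leave a marker for the next run.
import os
import tarfile
import time

SOURCE_DIR = "/home/user/documents"   # hypothetical data to protect
STATE_FILE = "/var/backups/last_run"  # hypothetical timestamp record

def last_backup_time():
    try:
        with open(STATE_FILE) as f:
            return float(f.read().strip())
    except (FileNotFoundError, ValueError):
        return 0.0  # no record yet, so treat this as a full backup

def changed_files(since):
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > since:
                yield path

def run_backup():
    since = last_backup_time()
    archive = f"/tmp/backup-{int(time.time())}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:  # compression step
        for path in changed_files(since):
            tar.add(path)
    # Encryption and transfer to the provider's servers would happen here.
    with open(STATE_FILE, "w") as f:
        f.write(str(time.time()))
    return archive

if __name__ == "__main__":
    print("Created", run_backup())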

Third-party cloud backups have gained popularity with small offices and home users because the process is convenient. Capital expenditures for additional hardware are not required, and backups can be run dark, meaning they run automatically without manual intervention.

In the enterprise, cloud backup services are primarily used for archiving non-critical data. Traditional backup is a better solution for critical data that requires a short recovery time objective (RTO) because there are physical limits to how much data can be moved over a network in a given amount of time. When a large amount of data needs to be recovered, it may need to be shipped on tape or some other portable storage medium.
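
The bandwidth limit is easy to see with back-of-the-envelope arithmetic. The figures below are hypothetical, chosen only to show the calculation.

# Rough recovery-time arithmetic with hypothetical numbers.
data_tb = 10                    # amount of data to restore, in terabytes
link_mbps = 100                 # sustained WAN throughput, in megabits per second

data_bits = data_tb * 1e12 * 8  # terabytes -> bits
seconds = data_bits / (link_mbps * 1e6)
hours = seconds / 3600

print(f"Restoring {data_tb} TB over {link_mbps} Mbps takes about {hours:.0f} hours")
# Roughly 222 hours (over nine days) -- hence tape shipment for large restores.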

Friday, April 15, 2016

CONTAINERS AS A SERVICE (CaaS)

Containers as a service (CaaS) is a form of container-based virtualization in which container engines, orchestration and the underlying compute resources are delivered to users as a service from a cloud provider. In some cases, CaaS is also used to describe a cloud provider's container support services.

With CaaS, users can upload, organize, run, scale, manage and stop containers using a provider's API calls or web portal interface. As is the case with most cloud services, users pay only for the CaaS resources -- such as compute instances, load balancing and scheduling capabilities -- that they use.

Within the spectrum of cloud computing services, CaaS falls somewhere between Infrastructure as a Service (IaaS) and Platform as a Service (PaaS). However, CaaS is most commonly positioned as a subset of IaaS. The basic resource for CaaS is a container, rather than a virtual machine (VM) or a bare metal hardware host system, which are used to support IaaS environments. However, the container can run within a VM or on a bare metal system.

Public cloud providers including Google, Amazon Web Services (AWS), IBM, Rackspace and Joyent all have some type of CaaS offering. For example, AWS has its Amazon EC2 Container Service (ECS), a high-performance container management service for Docker containers on managed Amazon EC2 instances. Amazon ECS eliminates the need for users to have in-house container or cluster management resources. Google's Container Engine service offers similar cluster management and orchestration capabilities for Docker containers.
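
As a hedged illustration of driving a CaaS offering through API calls, the sketch below uses the AWS SDK for Python (boto3) to start tasks on Amazon ECS. The cluster and task definition names are hypothetical, and real usage requires AWS credentials and an existing cluster with a registered task definition.

# Illustrative only: launching containers on Amazon ECS via boto3.
# "demo-cluster" and "web-app:1" are placeholder names, not real resources.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run two copies of a registered task definition on the cluster.
response = ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="web-app:1",
    count=2,
)

for task in response.get("tasks", []):
    print("Started task:", task["taskArn"])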

The key difference between providers' CaaS offerings is typically the container orchestration platform, which handles key tasks, such as container deployment, cluster management, scaling, reporting and lifecycle management. CaaS providers can use a variety of orchestration platforms, including Google Kubernetes, Docker Machine, Docker Swarm, Apache Mesos, fleet from CoreOS, and nova-docker for OpenStack users.

CaaS offerings are typically used by application developers deploying new applications.

Friday, March 11, 2016

NETWORK FUNCTIONS VIRTUALIZATION (NFV)

Network functions virtualization (NFV) is an initiative to virtualize the network services that are now being carried out by proprietary, dedicated hardware. If successful, NFV will decrease the amount of proprietary hardware that's needed to launch and operate network services.

The goal of NFV is to decouple network functions from dedicated hardware devices and allow network services that are now being carried out by routers, firewalls, load balancers and other dedicated hardware devices to be hosted on virtual machines (VMs). Once the network functions are under the control of a hypervisor, the services that once required dedicated hardware can be performed on standard x86 servers.

This capability is important because it means that network administrators will no longer need to purchase dedicated hardware devices in order to build a service chain. Because capacity can be added through software, network administrators will not need to overprovision their data centers, which will reduce both capital expenses (CapEx) and operating expenses (OpEx). If an application running on a VM required more bandwidth, for example, the administrator could move the VM to another physical server or provision another virtual machine on the original server to take part of the load. Having this flexibility allows an IT department to respond in a more agile manner to changing business goals and network service demands.

Network functions virtualization is different from software-defined networking (SDN) but complementary to it: when SDN runs on the NFV infrastructure, SDN forwards the data packets from one network device to another while the network routing (control) functions run on a virtual machine in, for example, a rack-mount server. The NFV concept, which was presented by a group of network service providers at the SDN and OpenFlow World Congress in October 2012, is being developed by the ETSI Industry Specification Group (ISG) for NFV.
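
To make the idea of a software-based service chain concrete, here is a purely conceptual Python sketch (not any vendor's API or a real NFV platform): each network function is just code operating on a packet, so functions can be added, removed or reordered without touching hardware. The policies and addresses are hypothetical.

# Conceptual sketch of an NFV-style service chain: each virtual network
# function (VNF) is plain software, so the chain can be rebuilt on demand.
def firewall(packet):
    blocked_ports = {23}  # hypothetical policy: drop telnet traffic
    return None if packet["dst_port"] in blocked_ports else packet

def load_balancer(packet):
    backends = ["10.0.0.11", "10.0.0.12"]  # hypothetical backend pool
    packet["dst_ip"] = backends[hash(packet["src_ip"]) % len(backends)]
    return packet

def run_chain(packet, chain):
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None  # dropped somewhere in the chain
    return packet

service_chain = [firewall, load_balancer]  # reorder or extend in software
packet = {"src_ip": "192.0.2.7", "dst_ip": "203.0.113.5", "dst_port": 443}
print(run_chain(packet, service_chain))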

Tuesday, February 9, 2016

SOFTWARE-DEFINED EVERYTHING (SDE)

Software-defined everything (SDE) is an umbrella term that describes how virtualization and abstracting workloads from the underlying hardware can be used to make information technology (IT) infrastructures more flexible and agile.

The software-defined data center (SDDC) is one in which all elements of the data center infrastructure -- including networking, storage, CPU and security -- are delivered as a service. An administrator can deploy, provision, configure and manage the infrastructure through software and automate as much work as possible (a brief sketch of this idea follows the list below). Many technologies fall under the SDE umbrella, including:
  • Software-defined networking (SDN) - abstracts network architecture to make network devices programmable, allowing the administrator to quickly respond to changing business requirements.

  • Software-defined storage (SDS) - decouples tasks from physical storage hardware, allowing the administrator to pool and manage storage resources through policies and administered configurations.

  • Software-defined infrastructure (SDI) - an analytics-driven approach to balancing the resources that application programs require in virtual and cloud computing environments. Also called composable infrastructure.
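
As promised above, here is a minimal, purely illustrative Python sketch of the software-defined pattern: the desired infrastructure is described as data, and a reconciliation loop (standing in for a real SDN, SDS or SDDC controller) brings the current state in line with it. The resource names and specifications are hypothetical.

# Conceptual sketch of software-defined provisioning: infrastructure is
# described declaratively and a controller reconciles reality with the spec.
desired_state = {
    "web-net":  {"type": "network", "vlan": 100},
    "app-vol":  {"type": "storage", "size_gb": 500},
    "web-vm-1": {"type": "compute", "cpus": 4, "ram_gb": 16},
}

current_state = {
    "web-net": {"type": "network", "vlan": 100},  # already provisioned
}

def reconcile(desired, current):
    """Stand-in for a controller's reconciliation loop."""
    for name, spec in desired.items():
        if current.get(name) != spec:
            print(f"Provisioning {name}: {spec}")  # an API call in a real system
            current[name] = spec
    for name in set(current) - set(desired):
        print(f"Decommissioning {name}")
        del current[name]

reconcile(desired_state, current_state)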