What is Node.js?

Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.

The V8 engine compiles JavaScript code directly to machine code before executing it, rather than using the traditional flow of an interpreter or a separate compilation step.

Node.js is an asynchronous, event-driven JavaScript runtime.

Asynchronous, event-driven model: a model in which a piece of code is processed or executed as and when its event happens, rather than waiting for the previous event to complete.

Consider clientA requesting resource1. While clientA is being served, another client, clientB, requests resource2. With traditional server software, clientB is served only after clientA has been served.

What if clientA takes a very long time? Every client that arrives after clientA has to wait that long, which is not acceptable in real-time systems.

This is why Node.js is built on an event-driven model. While clientA is being served, control does not wait for the request to complete. Node.js delegates the work of fetching the resource and meanwhile serves other clients. When the requested resource is ready, clientA is served with it.
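The flow above can be sketched with a minimal example. The client names and the timer delay are illustrative stand-ins: the "slow resource" for clientA is simulated with a timer, which delegates the wait to the event loop instead of blocking the program.

```javascript
// Minimal sketch of Node.js's non-blocking, event-driven model.
// clientA's slow resource fetch is simulated with a timer; the wait
// is delegated to the event loop rather than blocking execution.
const served = [];

// clientA requests resource1, which takes a while to fetch.
setTimeout(() => {
  served.push('clientA: resource1 ready');
}, 50);

// clientB requests resource2 while clientA is still waiting.
// Node.js serves it immediately -- it does not wait for clientA.
served.push('clientB: resource2 served');

console.log(served[0]); // clientB is served first
```

When the timer fires later, clientA's callback runs and clientA is finally served — but clientB never had to wait for it.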

Node.js is open-source

Node.js’s source code can be used, modified and/or shared under defined terms and conditions. This allows end users and commercial companies to review and modify the source code for their own customization, curiosity or troubleshooting needs.

Node.js is cross-platform

Node.js is implemented on multiple computing platforms: it can run on Microsoft Windows, Linux, and macOS.

Node.js is written in C, C++, JavaScript

The parts of Node.js that need the most performance are written in C. C++ implements some other modules as well as the integration of the V8 engine, while JavaScript provides many of the built-in modules.

Node.js is licensed under MIT

The MIT license imposes minimal restrictions on the use and distribution of Node.js.

Other Names

Node.js is often referred to as Node, Node JS, or NodeJS.

Node.js Resources

  1. Official Node.js Site
  2. Node.js Tutorial
  3. Node.js Interview Questions

How artificial intelligence could take over jobs

Artificial intelligence is on the rise nowadays. There is a lot of research going on and many advancements are being made. Artificial intelligence has certainly arrived in the field of analytics. Many jobs are based on a set of rules a person has to follow on a daily basis, and artificial intelligence and cognitive systems have become smart enough to act based on such rules. With the latest advancements in machine learning algorithms, artificial intelligence applications have reached the level of a human being of average IQ.

For a single task, artificial intelligence applications have reached far beyond the intelligence of human beings; but when there is a good mix of tasks, they do not yet perform up to the mark. In a time that is not very far away, however, these applications with built-in intelligence are going to replace human beings.

Many companies are working towards the realisation of these applications with intelligence built within them. Only the leaders in this race are going to survive the competition. Once the leading companies take the lead, most of the companies performing below the market average shall dissolve.

What do you know about Apache Spark?

Apache Spark is an open-source cluster-computing framework. Originally developed at the University of California, Berkeley’s AMPLab, the Spark codebase was later donated to the Apache Software Foundation, which has maintained it since.

Apache Spark’s story

In order to process and analyze huge amounts of data efficiently, Apache Hadoop adopted an engine called MapReduce, which soon became the only way of processing and analyzing data in the Hadoop ecosystem. Being the only one of its kind, it prompted communities to develop new engines for processing big data. This led to the evolution of Spark at Berkeley’s AMPLab. The developers at AMPLab decided to take advantage of the already established big data open-source community, so they donated the codebase to the Apache Software Foundation, and Apache Spark was born.

What does Apache Spark consist of?

Before discussing what Spark can do, let’s have a quick look at what Spark contains. Excluding Spark Core, Apache Spark has four libraries that address four areas. They are:

  1. Spark SQL
  2. Spark Streaming
  3. Spark Machine Learning library (also called Spark MLlib)
  4. GraphX

What can Apache Spark do?

Now that we know what Spark contains, let us see what it can do.

  1. Unlike Hadoop, Spark can process data in mini-batches and perform transformations on it as it arrives (Spark Streaming).
  2. With the help of Spark’s distributed machine learning framework (Spark MLlib), machine learning tasks can run on a Spark cluster of commodity hardware.
  3. Similarly, graph processing can be done using the distributed GraphX framework.
  4. Structured and semi-structured data can be processed using the Spark SQL component.

References to learn Apache Spark

If you are interested in learning Apache Spark, here are a few useful links to help you get started. Feel free to get your hands dirty.

  1. Apache Spark Official by Apache Software Foundation
  2. Apache Spark Tutorial by TutorialKart

How Apache Kafka is helping Industry

Apache Kafka is an open-source stream-processing platform, developed by the Apache Software Foundation and written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.

Apache Kafka has become popular in industry with the rise of stream processing. Many organisations are looking forward to including Kafka in their new projects, while some are trying to incorporate it into their existing applications.

Currently Kafka is being used for:

  • Application Monitoring
  • Data Warehousing
  • Asynchronous Applications
  • Recommendation Engines in Online Retail
  • Dynamic Pricing Applications
  • IoT (Internet of Things)

What is industry saying about Kafka?

  1. Kafka helps applications work in a loosely coupled manner.
  2. Kafka handles stream processing and has thus become the underlying data infrastructure.
  3. Kafka enables real-time processing of high volumes of data.
  4. Kafka improves application scalability.

Other References

If you are interested in learning Apache Kafka, you may refer to the following links.