IoT Device Communication Protocols

Secure, optimized data delivery between IoT devices, gateways and the cloud/edge is essential. Interactions between IoT endpoints follow the machine-to-machine (M2M) communication model. The protocol must be reliable, stable and secure, and most importantly it should enable seamless, real-time data transfer with low overhead. A variety of application and data transfer protocols exist in the IoT ecosystem. This article briefly explains these protocols and points out which one to choose for a given scenario and use case.

Requirements of IoT protocols

Many protocols are evolving in the IoT space to gather, transmit and transport M2M data. A protocol should satisfy the following requirements for efficient and effective data transmission:
  • Support transferring information from one endpoint to many
  • Be able to listen for events and react to them
  • Transfer small payload streams quickly
  • Sustain and transfer information in low-bandwidth network environments
  • Support power-constrained and processing-constrained devices/sensors
  • Support authentication and transport-level security
  • Deliver messages in near real time and real time
  • Ensure guaranteed message delivery and message persistence

IoT Network Stack


The IoT stack comprises diverse protocols for communication, from data collection and packaging through transfer and control:
  • Data Collection – Data is collected from multiple sensors and actuators
  • Data Aggregation – Data collected from multiple sources is aggregated
  • Data Assessment – Data is assessed, noise is filtered and unnecessary data is removed
  • Data Transmission – The assessed data is transmitted to the cloud/servers
  • Data Response – The machine receives the response back from the cloud/servers

Application protocols

IoT applications use a variety of application protocols. The following are some of the key protocols used in IoT applications.

HTTP/HTTPS:

HTTP (Hypertext Transfer Protocol) is a stateless protocol and one of the most widely used protocols in IoT, despite its non-persistent connections and the overhead it adds to transmitting IoT data. It defines the connection between a client (web user) and a web server. It is the most common protocol on the Internet and is used in IoT applications especially when traditional systems/devices connect with the IoT ecosystem. HTTP is not optimized for constrained-device communication: it offers neither quick delivery nor enhanced QoS. It is best suited for IoT devices/sensors that can initiate connections to a web server but do not need back-channel communication.

Key merits:
  • Stateless nature reduces the computing and memory burden on the server
Limitations:
  • Non-persistent connections
  • The request/response model is not feasible in many IoT use cases
  • Bulky header for each request/response
  • Continuous polling is required for back-channel communication
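
As a quick illustration of the request/response pattern, a device-side client pushes each reading with a separate HTTP POST. This is a minimal sketch in Python using the requests library; the endpoint URL and payload fields are hypothetical:

import requests

# One full request/response cycle (headers, TCP/TLS setup) per reading
resp = requests.post(
    "https://api.example.com/telemetry",          # hypothetical endpoint
    json={"deviceId": "sensor-1", "temperature": 22.5},
    timeout=5,
)
resp.raise_for_status()

Note that to emulate back-channel communication, the device would have to poll a similar endpoint repeatedly, which is exactly the overhead the limitations above describe.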

MQTT:

MQTT (Message Queuing Telemetry Transport) is specially designed for machine-to-machine (M2M) communication and IoT connectivity. MQTT is a lightweight protocol widely used to send frequent messages with small payloads. It is used by embedded monitoring devices and sensors for efficient message transmission. The main concepts are:
  • Topics – Decide on which topic to exchange messages between clients
  • Publish/Subscribe – Send and receive messages on specific topics
  • Messages – Packaged messages containing the payload
  • Broker – Receives, filters, routes and sends messages. Many brokers are available; Mosquitto and RabbitMQ are widely used



Architecture:


QoS (Quality of Service): MQTT supports three Quality of Service levels:
  • QoS 0 – At most once
  • QoS 1 – At least once
  • QoS 2 – Exactly once

Key merits:
  1. Asynchronous communication of events
  2. Low-overhead message transfer
  3. Ability to communicate in low-bandwidth environments
  4. Ability to operate and communicate with devices running in low-power environments
  5. Low footprint

Limitations:
  1. The connection needs to stay open at all times, which consumes more computing power and memory.
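
As an illustration of the topic-based publish/subscribe flow, here is a minimal sketch using the Eclipse Paho Python client (paho-mqtt 1.x callback style) against a broker assumed to be running locally, e.g. Mosquitto; the topic name is illustrative:

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection is acknowledged; QoS 1 = at least once
    client.subscribe("sensors/temperature", qos=1)

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client(client_id="sensor-1")
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)

# Publish a small payload on the same topic; the broker routes it to subscribers
client.publish("sensors/temperature", payload="22.5", qos=1)
client.loop_forever()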


AMQP

AMQP (Advanced Message Queuing Protocol) is designed for messaging middleware. AMQP uses TCP for reliable delivery, and connections are long-lived. It supports secure delivery using TLS (SSL). It is highly reliable and easily interoperable, and it provides reliable queuing, publish/subscribe, routing and secure transmission.

Architecture:





The broker has three main functions:
  • Exchange: Receives messages from publishers and routes them to the appropriate message queue
  • Message Queue: Stores messages until the consumer consumes them
  • Binding: The relationship between an exchange and a message queue, which decides the routing criteria

Exchange:
Messages are received and routed to the appropriate queue; a message can be routed to zero or more queues. The exchange types are:
  1. Direct
  2. Fan-out
  3. Topic
  4. Headers

Message Queue:
  1. Store and forward
  2. Private publish queue
  3. Private subscription queue

Binding:
Bindings are constructed from commands issued by the client application (the one owning and using the message queue) to an exchange:
Queue -> BIND -> Exchange (conditions)

Key merits:

  • AMQP gives better reliability and allows asynchronous delivery
  • Maintains long-lived connections
  • Fan-out helps to scale messages and route them to multiple components
  • Can work in brokerless peer-to-peer connection mode


Limitations:

  • Heavyweight protocol, not always suitable for IoT applications
  • Computing, power and memory requirements are relatively high compared to lightweight IoT protocols
  • Larger header size
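
To make the exchange/queue/binding roles concrete, here is a minimal sketch using the pika Python client against a RabbitMQ broker assumed to be running on localhost; the exchange and queue names are illustrative:

import pika

# Connect to a local RabbitMQ broker (assumed running on localhost)
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare an exchange, a queue, and the binding between them
channel.exchange_declare(exchange="telemetry", exchange_type="fanout")
channel.queue_declare(queue="dashboard")
channel.queue_bind(queue="dashboard", exchange="telemetry")

# Publisher side: the fanout exchange copies the message to every bound queue
channel.basic_publish(exchange="telemetry", routing_key="", body="22.5")

# Consumer side: fetch one message from the bound queue
method, header, body = channel.basic_get(queue="dashboard", auto_ack=True)
print(body)
connection.close()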


CoAP (Constrained Application Protocol)

CoAP is an application protocol for the Internet designed for constrained-device communication. It is modeled on HTTP and is designed to transfer documents between client and server. CoAP saves header space through bit fields and string mappings, and packet packaging and parsing use minimal resources thanks to its simple packet structure, which makes it well suited for constrained devices. Its operations include resource discovery, registration and notification. It uses UDP as the underlying transport protocol, which gives consistent performance and real-time delivery; packet reordering and retransmission must be handled by the application. HTTP and CoAP share the REST model for transferring content between client and server. CoAP maps easily to HTTP but is specifically designed for devices with constrained resources such as sensors and microcontrollers.

Architecture:




QoS (Quality of Service):
CoAP supports two levels of QoS:
  • Without acknowledgement – sends and forgets the message; guaranteed delivery is not ensured
  • With acknowledgement – sends the message and confirms delivery by receiving an acknowledgement
Key merits:
  • Works in power- and processing-constrained environments
  • Asynchronous communication
  • Well suited for home-device communication
  • Very fast device-to-device communication over UDP


Limitations:
  1. Message unreliability due to UDP. To ensure guaranteed message delivery, a mechanism needs to be added in the application stack.
  2. CoAP uses UDP, and many devices sit behind corporate firewalls and enterprise networks; the communication can be blocked by the firewall or by Network Address Translation.
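
A minimal CoAP GET request, sketched with the Python aiocoap library; the server address and resource path are hypothetical:

import asyncio
from aiocoap import Context, Message, GET

async def main():
    # CoAP runs over UDP; aiocoap handles confirmable-message retransmission
    context = await Context.create_client_context()
    request = Message(code=GET, uri="coap://localhost/sensors/temperature")
    response = await context.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(main())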

Websocket:

WebSocket is a bidirectional, connection-oriented protocol that uses TCP as the underlying transport. It uses HTTP for the initial handshake with the server and then maintains a persistent connection between server and client. The bidirectional connection lets the server notify the client whenever an event occurs at the other end. Browsers support WebSocket to connect with a server and transfer data in real time. WebSockets are an ideal choice for IoT communication that involves many small data transfers and requires back-channel communication. WebSockets can replace traditional HTTP polling thanks to their low overhead and bidirectional nature, and they are also suitable for ingesting streaming data. They avoid polling or long-polling, which requires querying the server at regular intervals for new data/events; with WebSocket, the server sends a message as soon as new data is available.

Architecture:




Key merits:
  • Persistent connectivity
  • Minimal header size
  • Bidirectional, asynchronous and real time
  • Suitable for IoT streaming data transfer


Limitations:

  • Keeping connections alive for a long time may not always be feasible
  • Long-lived connections are harder to scale behind load balancers
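
The flow is easy to sketch with the Python websockets library; after the HTTP handshake, either side can send at any time. The endpoint URL and message format here are hypothetical:

import asyncio
import websockets

async def main():
    # HTTP handshake first, then a persistent bidirectional TCP connection
    async with websockets.connect("ws://gateway.example.com:8080/telemetry") as ws:
        await ws.send('{"deviceId": "sensor-1", "temperature": 22.5}')
        # No polling needed: the server pushes when it has something to say
        reply = await ws.recv()
        print(reply)

asyncio.run(main())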

MQTT over WebSockets

MQTT is widely used for constrained-device communication. However, when a device or application wants to speak MQTT through a browser, MQTT messages must be carried over WebSocket, because browsers cannot open raw TCP connections. MQTT over WebSockets lets the browser leverage all MQTT features. It enables many IoT scenarios (see the sketch after this list), such as:
  • Applications that display live sensor/device data
  • Receiving alerts and notifications
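
In the browser this is typically done with a JavaScript MQTT client; for other clients, the same transport can be selected in the Paho Python client. A small sketch, assuming the broker exposes WebSocket support on port 8083 at path /mqtt (both are assumptions):

import paho.mqtt.client as mqtt

# Carry MQTT over WebSockets instead of raw TCP (paho-mqtt 1.x style)
client = mqtt.Client(transport="websockets")
client.ws_set_options(path="/mqtt")
client.connect("broker.example.com", 8083, keepalive=60)
client.publish("sensors/temperature", "22.5", qos=1)
client.disconnect()
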
Comparison

HTTP: Request/Response model; large payload; medium/large header; one-to-one connection; SSL/TLS encryption; device-to-cloud and cloud-to-cloud applications; TCP transport; no content discovery; all messages served with the same QoS; synchronous communication.

MQTT: Publish/Subscribe model; small payload; 2-byte minimum header; one-to-one and one-to-many connections; SSL/TLS encryption; device-to-device and device-to-cloud applications; TCP transport; no content discovery; 3 QoS levels; asynchronous communication.

AMQP: Point-to-point and Publish/Subscribe models; large payload; 8-byte header; one-to-one and one-to-many connections; SSL/TLS encryption; device-to-device, device-to-cloud and cloud-to-cloud applications; TCP transport; no content discovery; 2 QoS levels; asynchronous communication.

CoAP: Request/Response and Publish/Subscribe models; small payload; 4-byte header; one-to-one connection; DTLS encryption; state-transfer approach for device applications; UDP transport; content discovery supported; 2 QoS levels; asynchronous communication.

WebSocket: Request/Response model; small payload; 2-byte minimum header; one-to-one connection; SSL/TLS encryption; device-to-cloud and cloud-to-cloud applications; TCP transport; no content discovery; all messages served with the same QoS; asynchronous communication.


Streaming Evolution – File downloading to Adaptive Streaming

This article covers how media content travels from server to user. There are different ways to access the content, and over time many technologies have evolved to deliver it efficiently.
I have done extensive analysis and research on streaming technologies; as part of this, I want to share the details here.

Content download and streaming are the two possible ways to view media content. Two ways of downloading are possible, namely file download and progressive download.

File Downloading:

File downloading allows the entire media file to be downloaded over HTTP or FTP from a web server and saved in the local device memory. The files are referenced using hyperlinks. The downloaded file can then be opened with an appropriate media player application to view the content. This method is most suitable for small files; for larger files, the user has to wait a long time for the download to complete before viewing the content.



Progressive Download
Progressive download allows a file to be downloaded and rendered at the same time. The streaming file is downloaded from the web server to the client device; as soon as the download starts, the client invokes the media player, which begins playback once a sizeable amount of data is available in the client's playout buffer. The user can store the complete data and play the content whenever required without downloading it again from the server. The media player stops playback if there is a buffer underrun (the playback rate exceeds the download rate) and resumes after further data is downloaded. Conversely, a buffer overrun happens when the download rate exceeds the playback rate.
Progressive download uses HTTP (Hypertext Transfer Protocol) over TCP (Transmission Control Protocol). TCP is a reliable protocol optimized for guaranteed delivery, irrespective of file format or size, and it controls the actual packet transport over the IP network. Packet retransmission consumes extra bandwidth and time, which restricts the real-time end-user experience. Regardless of bandwidth drops or surges, the video representation remains the same for the entire duration. HTTP web servers keep the data flowing until the download is complete; progressive download uses the existing web infrastructure and does not require any additional firewall or Network Address Translation (NAT) configuration, which are major issues in RTSP/RTP streaming.


Streaming
Streaming is the process of breaking the data in a file into small packets that are sent in a steady, continuous flow, as a stream, to the end device. As soon as the first few data packets are received, playback starts while the rest of the packets are transferred to the end user's device. The packets are reassembled on the client side based on sequence number and timestamp. A short initial playout-buffer delay is required to accumulate a small amount of data in the buffer. The client playout buffer ensures that playback continues uninterrupted despite variations in the received data rate and network delay.

Application-level protocols sit on top of the transport protocol and are required to deliver application-specific data and events. At the transport level, UDP/IP and TCP/IP are used in packet-switched networks to transport the content. Application protocols such as Real Time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP), HTTP and Real Time Messaging Protocol (RTMP) are some of the most widely used protocols for media streaming.
Media streaming is categorically divided into live and on-demand based on the content origin. In on-demand media streaming, stored encoded media content is delivered to the consumer using a specific set of protocols. In live media streaming, the content is captured on the fly, encoded and transmitted to the user; such streaming requires fast processing capability to encode with minimal latency.



RTP media streaming over UDP

User Datagram Protocol (UDP) is widely used in packet data networks to stream multimedia content because of its flexibility and real-time delivery behavior. RTP streaming over UDP is widely used in low-latency media and entertainment applications such as streaming, video telephony, video conferencing, set-top box applications and push-to-talk features. The Real-time Transport Protocol (RTP) and the Real Time Control Protocol (RTCP) are the application protocols for payload transmission and control, respectively. Generally, Real Time Streaming Protocol (RTSP) over TCP is used for session initiation and description, even though the specification also allows RTSP over UDP. Figure 4 shows the communication flow between streaming client and server.
The streaming operation is logically divided into three phases:
  • Session description and control
  • Media payload transportation
  • Session quality and feedback


Session Description Protocol (SDP) is a presentation description protocol that describes the session parameters required for initiation, setup and negotiation. It provides information such as protocol version, session name, network connection details, display orientation, media type, port, protocol, format and session duration. It can be extended to carry new media types and profile information.
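
For illustration, a minimal SDP description for a single H.264 video stream could look like this (addresses, ports and timing values are placeholders):

v=0
o=- 2890844526 2890842807 IN IP4 192.0.2.10
s=Sample Session
c=IN IP4 192.0.2.10
t=0 0
m=video 51372 RTP/AVP 96
a=rtpmap:96 H264/90000
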
RTCP packets provide significant feedback information to the client and server. The function of RTCP is to monitor transmission and reception parameters and convey that information to all participants in an ongoing session.

Interleaving RTP/RTCP over RTSP

Even though RTP and RTCP transmission over UDP gives a real-time user experience, UDP delivery between streaming client and server is not reliable, and UDP packets are often blocked by network firewalls and NAT. Alternatively, a packet-switched RTSP/RTP solution can be transmitted over a full-duplex TCP connection by interleaving RTP into the RTSP session, provided RTSP uses TCP for transport. RTSP channel number 0 is used for RTP to transmit the data stream, and channel number 1 is used for RTCP to transmit control messages.

HTTP Tunnelling
HTTP tunnelling is another method to let RTP/RTSP data pass through firewalls, since most systems allow HTTP traffic to traverse. RTP and RTSP streams are wrapped in HTTP messages and transported over TCP; the receiver unpacks the HTTP packets to recover the RTP/RTCP packets. Even though sending streaming data via HTTP is the least efficient option, it ensures the most reliable delivery.

RTMP Streaming

Real-Time Messaging Protocol (RTMP) is a proprietary, stateful Adobe media protocol used for streaming over TCP. It permits live and on-demand streaming of audio, video and data between a Flash media player and a Flash Media Server. It supports video formats such as FLV and H.264 (MP4/MOV/F4V) and audio formats such as MP3 and AAC (M4A). The RTMP variations are:
  • RTMP – Adobe's Real-Time Message Protocol
  • RTMPS – RTMP over a TLS/SSL connection
  • RTMPE – RTMP encrypted using Adobe's own security mechanism
  • RTMPT – RTMP tunnelled over HTTP. RTMP data is encapsulated and exchanged via HTTP to avoid the firewall/NAT issues of plain RTMP transfer
  • RTMPTE – RTMP encrypted and tunnelled over HTTP

After session establishment, the Flash media server divides each media stream into a number of small fragments of different sizes and sends the media as a steady stream of small packets until the session ends. The receiver and sender dynamically negotiate the size of the fragments to be transmitted.

Adaptive streaming
The basic concept of adaptive streaming is to divide audio and video into a number of small chunks of appropriate duration, encode them at different bit rates, store them and deliver them to the client using HTTP download. (A sketch of the client-side rate selection follows below.)
Microsoft Smooth Streaming, Apple HTTP Live Streaming (HLS), Adobe HTTP Dynamic Streaming (HDS) and MPEG-DASH (Dynamic Adaptive Streaming over HTTP) are the most frequently used adaptive streaming techniques.
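
The client continuously measures throughput and picks the highest bit-rate rendition it can sustain for the next chunk. This is a simplified Python sketch of that heuristic; the bit-rate ladder and the 0.8 safety factor are illustrative assumptions, not part of any specification:

def pick_bitrate(measured_kbps, ladder=(800, 2500, 5000)):
    # Leave 20% headroom so small throughput dips do not stall playback
    safe_kbps = measured_kbps * 0.8
    eligible = [rate for rate in ladder if rate <= safe_kbps]
    # Fall back to the lowest rendition when the network is very slow
    return eligible[-1] if eligible else ladder[0]

# Example: with ~3.2 Mbps measured, the client requests the 2500 kbps chunks
print(pick_bitrate(3200))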

Microsoft Smooth streaming
A Smooth Streaming server stores manifest files along with the media files; it keeps a client manifest and a server manifest for each media item. The client initiates the connection request to the server with a URL, and the server sends the manifest file, which contains the metadata the client needs to start the session. The server manifest file uses the .ism extension.
An .ismv file contains video and audio data, or video data only. In an audio-video presentation, the audio track is multiplexed into the video in the .ismv file instead of being stored in a separate file, and each bit-rate representation is stored in a separate .ismv file. An .isma file contains only audio and is required for audio-only streaming.

HTTP Live Streaming (HLS)

The basic principle of HLS is to divide the overall stream into a number of small segmented MPEG2-TS files delivered via HTTP download. Segmentation produces the different media representation units and creates the index (m3u8) files. For multiple bit-rate representations, the main m3u8 file contains entries for sub-index m3u8 files to support multiple encodings of the same presentation. An HLS server manages thousands of individual fragments and sends the fragment stream to the client. Each fragment contains a Program Association Table (PAT) and a Program Map Table (PMT) at the start, along with the media data.
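
For illustration, a minimal master playlist referencing three bit-rate renditions could look like this (bandwidths, resolutions and paths are placeholders):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8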

HTTP Dynamic Streaming (HDS)

HTTP Dynamic Streaming (HDS) is an openly specified streaming solution developed by Adobe to support adaptive-bit-rate live and on-demand streaming over HTTP, using standard HTTP caching servers and a fragmented MP4 container format.

Dynamic Adaptive Streaming over HTTP (DASH)

An XML-based manifest (MPD) file describes the media presentation details and playlists, similar to Smooth Streaming. Media segments in the various bit-rate representations are requested based on the manifest information. The MPD file contains stream information at the beginning, followed by media presentation content for various time periods; each period contains a number of adaptation sets for the audio and video streams of the specified duration.

Adaptive HTTP streaming uses either the MPEG-TS or the fMP4 (fragmented MP4) container format.

Content streaming using cloud infrastructure and services
Cloud infrastructure helps to host media streaming servers and content files. The computing infrastructure makes it possible to create a virtual environment and install media servers, either custom-built or from a third-party provider. Cloud infrastructure and services help to:
  • Provisioning media server
  • Host content in cloud storage
  • Content encoding/transcoding services
  • Media engine to generate multiple format output
  • Live streaming
  • Edge content delivery
  • Live video analytics

C and C++ Static and Dynamic Code Analysis- Code Quality improvement


Writing good-quality code is the goal of every software developer. However, writing very good code that follows all the rules on the first attempt is difficult. It is essential to use code quality tools, unit test execution and coverage measurement to identify the main vulnerabilities and the coverage of the code, and to fix issues before the release goes to quality testing. Complexity analysis and quality metrics are important for developers, architects and managers to shape the project to deliverable quality.
There are many open source and commercial code quality tools available in the market. Code quality tools can be categorized into three main categories:
1. Static and dynamic analysis tools
2. Code coverage tools
3. Unit test frameworks
Static and dynamic analysis tools:
Static code analysis scans the entire project, its directories and files, and provides a detailed report about code quality and potential issues such as memory leaks, unused variables, dereferencing of null pointers, boundary violations etc. It can give developers on-the-fly feedback about potential issues and can also report run-time errors and warnings. The tools also generate metrics for easy visualization and understanding of code quality. A few standard static analysis tools already incorporate coding-standard compliance mechanisms such as MISRA C/C++, CWE etc.
I have used many open source tools in my career for analysing C, C++, Java and JavaScript code. I am going to list a few open source static analysis tools and demonstrate the usage of one or two familiar tools with an example. I have captured and listed this information from my own study and experimentation. Some information may be inaccurate or missing 😊.



Static Code analysis – Open Source

Cppcheck
Features:
  • Static code analysis
  • Detects undefined behavior
  • Memory leak detection
  • MISRA compliance check
  • Bounds checking
  • Function usage checks
  • Available as Eclipse and Jenkins plugins
  • Supports custom configuration files
  • Custom rules using regular expressions
Metrics/Format:
  • Results categorized as Error, Warning, Performance and Informative
  • Report in XML format
  • XML can be converted to HTML
Comments/Links:
  • Eclipse plugin: Cppcheclipse
  • Jenkins plugin: Cppcheck Plugin

SonarQube and SonarLint
Features:
  • Open source platform for continuous inspection
  • Uses pattern matching and dataflow analysis
  • SonarLint is an Eclipse plugin that connects with SonarQube
  • SonarLint needs a C/C++ plugin installed in SonarQube for C/C++ code analysis
  • Finds code smells, bugs and security vulnerabilities
  • Parser supports the C89, C99, C11, C++03, C++11, C++14 and C++17 standards
  • Buffer checks, memory leak detection, condition checks, pointer checks
  • Analysis is triggered automatically
  • Rules can be configured
  • Imports unit test execution reports (e.g. CppUnit) and coverage results (e.g. GCOV)
  • Multithreaded code scan can be activated
  • CWE compatible
  • Cognitive complexity
  • Visualizes the history of a project
  • Enforces a quality gate
Metrics/Format:
  • Lines of code, duplicates, issues per category
  • Bugs, vulnerabilities, coverage
  • Issues categorized as Error, Warning, Performance and Informative
  • Memory leaks, dead code, logic flow errors, coding convention, error handling
Comments/Links:
  • https://www.sonarqube.org/

Clang Static Analyzer
Features:
  • Can be run either as a standalone tool or within Xcode
  • Fast and lightweight; built on top of Clang and LLVM
  • Core checkers and C++ checkers
  • Checks for unused code
  • Nullability checkers
  • Flags insecure API usage and performs checks based on the CERT Secure Coding Standards
  • Checks use of Unix and POSIX APIs

SonarQube C++ plugin
Features:
  • Adds C++ support to SonarQube with a focus on integrating existing C++ tools
  • SonarCFamily for C/C++ – commercial version
  • SonarQube C++ (sonar-cxx) – free community plugin
  • Supports integrating Cppcheck for code analysis
  • Supports integrating CppUnit for executing unit tests
  • Supports integrating gcov/gcovr for coverage reports

Valgrind
Valgrind is an instrumentation framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail. You can also use Valgrind to build new tools.
The Valgrind distribution currently includes six production-quality tools: a memory error detector, two thread error detectors, a cache and branch-prediction profiler, a call-graph generating cache and branch-prediction profiler, and a heap profiler. It also includes three experimental tools: a stack/global array overrun detector, a second heap profiler that examines how heap blocks are used, and a SimPoint basic block vector generator.

Code coverage Tools

Covtool
  • Open source coverage tool for code analysis
  • Dynamic code analysis
  • Not maintained for a long time
  • Reports the total number of (instrumented) lines of code and the total number executed

Gcov / gcovr
  • Code coverage analysis and statement-by-statement profiling tool
  • Used in conjunction with GCC to test code coverage
  • gcov can be used along with the profiling tool gprof
  • Text output with total lines, coverage statistics with summary statistics, and lists of uncovered lines
  • Branch coverage
  • XML output compatible with the Cobertura code coverage utility
  • HTML output with coverage rates indicated using colored bar graphs

Unit test framework

CppUnit
  • C++ port of the JUnit framework for unit testing
  • Very familiar to developers who have used JUnit or similar testing tools
  • XML output compatible with continuous integration reporting systems
  • Automatic testing with XML output, plus a GUI for supervised tests

Bandit
  • Specifically developed for C++11

CppTest
  • Portable and powerful, yet simple, unit testing framework for handling automated tests in C++
  • Focuses on usability and extensibility
  • Hierarchical test suites
  • Simple to use: one include file and a single Suite class in the Test namespace
  • Rich assertion messages
  • Output handlers: Test::TextOutput (the simplest; verbose or terse mode), Test::CompilerOutput (resembles a compiler build log), Test::HtmlOutput (fancy HTML output)

Google Test
  • Automatic test discovery
  • A rich set of assertions and user-defined assertions
  • Death tests
  • Fatal and non-fatal failures
  • Value-parameterized tests
  • Type-parameterized tests
  • Various options for running the tests
  • XML test report generation
  • Does not support C++11 move semantics

CppUTest
  • xUnit test framework for unit testing and for test-driving your code
  • Allows running tests JUnit style
  • Available as an Eclipse plugin

Catch
  • Fast test setup, ease of use and detailed reporting
  • No external dependencies, as long as you can compile C++11 and have a C++ standard library available
  • Write test cases as self-registering functions (or methods, if you prefer)
  • JUnit XML output is supported for integration with third-party tools such as CI servers

Boost Test Library
  • Handles exceptions and crashes very well
  • Simplifies writing test cases using various testing tools
  • Organizes test cases into a test tree
  • Relieves you from messy error detection, reporting duties and framework runtime-parameter processing



The section below describes the setup and usage of SonarQube.
SonarQube:
SonarQube is an open source platform for continuous code quality inspection: it performs automatic inspection of code to check coding rules, detect bugs and find vulnerabilities. It can perform checks on more than 20 programming languages. The SonarQube server hosts and interprets reports and generates metrics using the appropriate scanner plugin for each language. The following are the key metrics provided by SonarQube for C/C++ code analysis:
  • Bugs and vulnerabilities
  • Code smells
  • Coverage: lines to cover, uncovered lines, line coverage
  • Duplications: lines, files
  • Issues: Blocker, Critical, Major, Minor; issue counts per directory and per file
  • Reliability, Security and Maintainability ratings
  • Size: lines of code, lines, statements, functions, classes, files, directories, comment lines, comment %
  • Complexity: cyclomatic complexity (overall project, per component and per file)
  • Quality gate: coverage on new code, duplicated lines on new code (%), maintainability rating on new code, reliability rating on new code, security rating on new code

Setup and Run SonarQube Server:
1. Download SonarQube for Linux from https://www.sonarqube.org/downloads/.
2. The prerequisites for SonarQube installation and execution are listed at https://docs.sonarqube.org/display/SONAR/Requirements. Make sure the appropriate version of the JRE/JDK is installed.
3. Use the commands below on Ubuntu to install OpenJDK and the OpenJDK JRE, version 8:
sudo apt-get install openjdk-8-jdk
sudo apt-get install openjdk-8-jre
4. Set up the Java 8 environment variables and check the Java version:
sudo apt-get install oracle-java8-set-default
java -version
5. Configure the parameters as described in https://docs.sonarqube.org/display/SONAR/Requirements.
6. Install SonarQube in the /etc folder and run it using the command: sudo ./etc/sonarqube-x.x/bin/linux-x86-32/sonar.sh console
The messages below show that the SonarQube server started successfully:
jvm 1    | 2018.04.23 11:44:46 INFO  app[][o.s.a.SchedulerImpl] Process[web] is up
jvm 1    | 2018.04.23 11:44:46 INFO  app[][o.s.a.p.ProcessLauncherImpl] Launch process[[key='ce', ipcIndex=3, logFilenamePrefix=ce]] from [/etc/sonarqube-6.7.1]: /usr/lib/jvm/java-8-oracle/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/etc/sonarqube-6.7.1/temp -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -cp ./lib/common/*:./lib/server/*:./lib/ce/*:/etc/sonarqube-6.7.1/lib/jdbc/h2/h2-1.3.176.jar org.sonar.ce.app.CeServer /etc/sonarqube-6.7.1/temp/sq-process8900001930010179462properties
jvm 1    | 2018.04.23 11:44:52 INFO  app[][o.s.a.SchedulerImpl] Process[ce] is up
jvm 1    | 2018.04.23 11:44:52 INFO  app[][o.s.a.SchedulerImpl] SonarQube is up

7. Open http://localhost:9000 in a browser to access the SonarQube server dashboard.




8. Log in to the SonarQube dashboard using the default username admin and password admin.

9. Generate a new token from http://localhost:9000/account/security/; the same token should be used by clients (scanners) to establish the connection.

Install C/C++ plugin and Scanners for Static Analysis:
1.      Commercial plugin: download the SonarCFamily plugin (http://www.sonarsource.com/products/plugins/languages/cpp/) and copy it to the SonarQube plugin directory (/etc/sonarqube-x.x/extensions/plugins/). It requires a valid license.
2.      Community plugin: download the community sonar-cxx plugin from https://github.com/SonarOpenCommunity/sonar-cxx and copy it to the SonarQube plugin directory (/etc/sonarqube-x.x/extensions/plugins/). It does not require any license.
3.      The C++ plugin does not execute any test code, run coverage tools or run static code checkers inside SonarQube. It basically interprets the reports provided by the scanner, generates metrics and loads them into the server.
4.      Note that sonar-cxx implements multiple sensors, for tools such as Cppcheck, PC-lint, Valgrind etc. The complete list of supported sensor implementations can be seen inside the plugin under sonar-cxx/org/sonar/plugins/cxx/. In other words, the project generates reports with tools such as Cppcheck, PC-lint and Valgrind, pushes the reports to the server, and the reports are interpreted, validated and turned into metrics by the sonar-cxx plugin on the server side.
5.      It is the application's or project's responsibility to generate reports and push them to the SonarQube server. The application or project can use the command line tool (SonarScanner) or an IDE plugin (SonarLint for Eclipse) to generate the report, connect with SonarQube and push it.
6.      SonarLint will not work with the community sonar-cxx plugin; it works only with the SonarCFamily plugin for C/C++. So SonarScanner is the option for open source use.
7.      Install the SonarScanner command line interface from https://docs.sonarqube.org/display/SCAN/Analyzing+with+SonarQube+Scanner. SonarScanner is used to scan, connect with SonarQube, push the report and display the analysis result.
8.      Configure sonar.host.url in sonar-scanner.properties (sonar-scanner-x.x/conf):
#----- Default SonarQube server
sonar.host.url=http://localhost:9000

9.      Update the PATH variable with the sonar-scanner installation path.

The setup is now fully ready: the scanner can connect to the server, and SonarQube can interpret reports, analyse them and publish the results. But we need a report to analyse. How do we generate one? Static analysis reports have to be generated at the project level. Next we will see how to generate static analysis reports using multiple tools in the project and send them to the server for analysis.

Integrate Static Analysis, Code coverage and Unit test framework in Project and Generate report:

Here I want to discuss how to integrate a few standalone static analysis tools in a project, define rules and generate reports.

1.      Capture Compiler Warnings
First, let us start with capturing the compiler warning report. Most of the time compiler warnings are not taken into account; however, fewer compiler warnings generally indicate better code quality. To include all compiler warnings in the static analysis report, enable the compiler's warning flags and capture all warnings in a file (a combined example command follows below):
1.      Enable the -fdiagnostics-show-option flag in the GCC build options
2.      Redirect all warnings into a file (warning.log)
3.      In the sonar-scanner.properties file, define the following properties:
sonar.cxx.compiler.parser=GCC
sonar.cxx.compiler.reportPath=warning.log
The options and procedure are listed at https://github.com/SonarOpenCommunity/sonar-cxx/wiki/Compilers
While running the analysis, SonarScanner fetches the file and pushes it to the server for analysis.
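
For example, the first two steps can be combined into a single compile command (the source paths are illustrative; warnings go to stderr, which is redirected to the log):

g++ -Wall -Wextra -fdiagnostics-show-option -c src/*.cc 2> warning.log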

2.      Run CPPCHECK and Capture Static Analysis report
Cppcheck is a general-purpose static checking tool. It detects many real issues with few false positives, and it can also be used for MISRA C compliance checks. Complete installation and configuration details are available at http://cppcheck.sourceforge.net/
A.     Download and install Cppcheck on the system (http://cppcheck.sourceforge.net/)
B.     Running $ cppcheck on the command line displays the tool's options and information
C.     To enable all checks and store the output as XML, use the command below:
cppcheck -v --enable=all --xml -Isrc  src 2> output/cppcheckreport.xml
This analyses the src folder and stores the XML result in the output folder. The --enable option turns on additional checks; its values are performance, portability, information, style and all. Enabling 'all' includes all the checks.


3.      Run VERA and Capture Static Analysis report
Vera++ is a static code checker focusing on code style issues. The make target below runs it over the sources and converts its output into a checkstyle-compatible XML report:
sonar_vera:
       bash -c 'find src -regex ".*\.cc\|.*\.hh" | vera++ - -showrules -nodup |& vera++Report2checkstyleReport.perl > $(BUILD_DIR)/vera++-report.xml'

4.      RATS to perform security check and analysis
RATS performs a rough analysis for security problems such as common security-related programming errors, buffer overflows and TOCTOU (Time Of Check, Time Of Use) race conditions. Download and install the tool on the system. The RATS report can be captured as XML using:

                 rats -w 3 --xml src > output/rats-report.xml

5.      VALGRIND for memory analysis in program
Valgrind can be used for memory debugging, memory leak detection and profiling. Install the tool on the system first, then use the command below to get a Valgrind report for a program in XML format:
              valgrind --xml=yes --xml-file=output/valgrind-report.xml   output/program


For more details about installation and usage options refer to http://valgrind.org/

6.      Unit Test and Coverage
Unit testing and coverage are critical for detecting errors during the development phase and measuring code coverage. The Google Test framework is a widely used unit test tool for C/C++. The procedure below illustrates the usage of gtest and gcovr.
A.     Google Test
Setup and build:
gtest enables writing independent, reusable and portable C++ unit test cases.
1.      Download the latest release of gtest from https://github.com/google/googletest
2.      Build using CMake as described at https://github.com/google/googletest/blob/master/googletest/README.md
3.      Alternatively, build without CMake by copying the src directory into the local working directory and using a makefile rule such as:

libgtest.a:
       g++ -I /gtest/include -I /gtest/src  -c  src/gtest-all.cc -o gtest-all.o
       g++ -I /gtest/include -I /gtest/src  -g  -c  src/gtest_main.cc -o gtest_main.o
       ar  -r libgtest.a gtest-all.o gtest_main.o

4.      Now the gtest library is compiled and ready for writing test cases.
Writing Test cases and Execution:
The example below shows how to write test cases for a simple class method.
component1.cc has a simple class and method:

// Class
class Test1 {
public:
    int Check(int, int);
};

// Returns the sum of two ints
int Test1::Check(int a, int b) {
    return (a + b);
}

The source component is built into a library using:

libcomponents.a: components.o
       ar -r $@ components.o

The test cases for the method are written as follows:

#include <gtest/gtest.h>
#include "component1.cc"

namespace {
    class Component1Test : public ::testing::Test {
    protected:
        Test1 test1;
    };

TEST_F(Component1Test, PositiveNos) {
    ASSERT_EQ(3, test1.Check(1, 2));
    ASSERT_EQ(0, test1.Check(0, 0));
    ASSERT_NE(11, test1.Check(10, 0));   // 10 + 0 is 10, not 11
    ASSERT_NE(21, test1.Check(9, 19));   // 9 + 19 is 28, not 21
}

TEST_F(Component1Test, NegativeNos) {
    ASSERT_EQ(-8, test1.Check(-8, 0));
    ASSERT_EQ(0, test1.Check(5, -5));
    ASSERT_NE(-2, test1.Check(2, -5));   // 2 + (-5) is -3, not -2
    ASSERT_EQ(-5, test1.Check(-4, -1));
    ASSERT_EQ(-4, test1.Check(-6, 2));
}
}  // namespace

TEST() and TEST_F() register the test cases with Google Test.
RUN_ALL_TESTS() runs all the registered tests in the unit; it can be called from main():

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}

The test binary is linked against the component and gtest libraries, with coverage instrumentation enabled:

test_component1: libcomponents.a test_component1.o
       g++ test_component1.o libgtest.a libcomponents.a -lpthread --coverage -o $@
Run the unit test cases and capture the report in XML using:
./test_component1 --gtest_output=xml:xunit-report.xml



B. gcovr (GCC Code Coverage Report)

Follow the instructions at https://pypi.org/project/gcovr/ to install gcovr.
Collect the coverage data and store it in XML format using the command:
gcovr -x -r . > gcovr-report.xml




Retrieve reports, rules loading, and report analysis

Now we have the cppcheck, rats, vera++, gtest and gcovr reports in XML form. Next, run SonarScanner to push all the reports to the SonarQube server and analyse them using the community C/C++ plugin on the server; the plugin interprets the reports and generates the possible metrics.
1. Create a sonar-project.properties file in the root directory of the project to be scanned.
2. Add sonar.projectKey=<project_name>, and sonar.login=<Generated-SonarQube-securitytoken> if force authentication is enabled in the SonarQube server settings.
3. The relevant properties for the reports (sonar.cxx.cppcheck.reportPath, sonar.cxx.vera.reportPath etc.) should be defined in the sonar-scanner.properties file:

# required metadata
sonar.projectKey=CxxPlugin:Sample
sonar.projectName=Sample
sonar.projectVersion=0.0.1
sonar.language=c++


# paths to the reports
sonar.cxx.cppcheck.reportPath=build/cppcheck-report.xml
sonar.cxx.coverage.reportPath=build/gcovr-report*.xml
sonar.cxx.coverage.itReportPath=build/gcovr-report*.xml
sonar.cxx.coverage.overallReportPath=build/gcovr-report*.xml
sonar.cxx.valgrind.reportPath=build/valgrind-report.xml
sonar.cxx.vera.reportPath=build/vera++-report.xml
sonar.cxx.rats.reportPath=build/rats-report.xml
sonar.cxx.xunit.reportPath=build/xunit-report.xml

4. To launch the analysis and push the reports, run the sonar-scanner command from the project root.
5. On successful analysis, an EXECUTION SUCCESS message is displayed.
6. Open http://localhost:9000 to access the SonarQube server dashboard and view the results. The report generated from the sample project is shown below.








PCLint report is not included.


