Friday 21 July 2017

WHAT IS EMBEDDED VISION

In recent years, a miniaturization trend has been established in many areas of electronics. For example, ICs have become more and more integrated and circuit boards in the electrical industry have become smaller and more powerful. This has also made PCs, mobile phones and cameras more and more compact and powerful. This trend can also be observed in the world of vision technology.
A classic machine vision system consists of an industrial camera and a PC: both were significantly larger just a few years ago. Within a short time, however, ever smaller PCs became possible, and the industry saw the introduction of single-board computers, i.e. complete computers built on a single board. At the same time, camera electronics became more compact and cameras successively smaller. On the way to even higher integration, small cameras without housings are now offered that can easily be integrated into compact systems.
Thanks to these two developments, the shrinking of both the PC and the camera, it is now possible to design highly compact vision systems for new applications. Such systems are called embedded (vision) systems.

DESIGN AND USE OF AN EMBEDDED VISION SYSTEM

An embedded vision system consists, for example, of a camera, a so-called board level camera, which is connected to a processing board. Processing boards take over the tasks of the PC from the classic machine vision setup. As processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and also more cost-effective. The interfaces for embedded vision systems are primarily USB and Basler BCON for LVDS.
Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.

WHICH EMBEDDED SYSTEMS ARE AVAILABLE?

As embedded systems, there are popular single-board computers (SBCs), such as the Raspberry Pi®. The Raspberry Pi® is a mini-computer with established interfaces that offers a similar range of features to a classic PC or laptop.
Embedded vision solutions can also be implemented with so-called systems on modules (SoM) or computers on modules (CoM). These modules represent a complete computing unit. To adapt the desired interfaces to the respective application, a so-called carrier board is needed. It is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs or CoMs (or the entire system) are cost-effective on the one hand, since they are available off the shelf, while on the other hand they can be individually customized through the carrier board.
For large production quantities, individually developed processing boards are a good choice.
All modules, single-board computers, and SoMs, are based on a system on chip (SoC). This is a component on which the processor(s), controllers, memory modules, power management and other components are integrated on a single chip.
It is only thanks to these efficient components, the SoCs, that embedded vision systems have become available in such a small size and at such low cost.

CHARACTERISTICS OF EMBEDDED VISION SYSTEMS VERSUS STANDARD VISION SYSTEMS

Most of the above-mentioned single-board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture.
The open-source Linux operating system is widely used as an operating system in the world of ARM processors. For Linux, there is a large number of open-source application programs, as well as numerous freely-available program libraries.
Increasingly, however, x86-based single-board computers are also becoming common.
A consistently important criterion for choosing the computer is the space available in the embedded system.
For the software developer, program development for an embedded system differs from that for a standard PC. As a rule, the target system does not provide a suitable user interface that could also be used for programming. The software developer must either connect to the embedded system via an appropriate interface, if available (e.g. a network interface), or develop the software on a standard PC and then transfer it to the target system.
When developing the software, it should be noted that the hardware concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.
However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the mobile phone, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and is therefore a universal computer.

WHAT ARE THE BENEFITS OF EMBEDDED VISION SYSTEMS?

In some cases, much depends on how the embedded vision system is designed. A single-board computer is often a good choice as this is a standard product. It is a small compact computer that is easy to use. This solution is also useful for developers who have had little to do with embedded vision.
On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. This solution is suitable for small to medium quantities. The leanest setup is obtained through a customized system. Here, however, higher integration effort is a factor. This solution is therefore suitable for large unit numbers.
The benefits of embedded vision systems at a glance:
  •  Lean system design
  •  Light weight
  •  Cost-effective, because there is no unnecessary hardware
  •  Lower manufacturing costs
  •  Lower energy consumption
  •  Small footprint

WHICH INTERFACES ARE SUITABLE FOR AN EMBEDDED VISION APPLICATION?

Embedded vision is the technology of choice for many applications. Accordingly, the design requirements are widely diversified. Depending on the specification, Basler offers a variety of cameras with different sensors, resolutions and interfaces.
The two interface technologies that Basler offers for embedded vision systems in the portfolio are:
  •  USB3 Vision for easy integration and
  •  Basler BCON for LVDS for a lean system design
Both technologies work with the same Basler pylon SDK, making it easier to switch from one interface technology to the other.

USB3 VISION

USB 3.0 is the right interface for a simple plug-and-play camera connection and is ideal for connecting cameras to single-board computers. The Basler pylon SDK gives you easy access to the camera within seconds (for example, to images and settings), since USB 3.0 cameras are standard-compliant and GenICam-compatible.
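To illustrate how quickly such a plug-and-play connection can be accessed in practice, here is a minimal sketch using pypylon, the Python wrapper for the Basler pylon SDK. The 5000 ms timeout and the single-frame grab are illustrative assumptions, not requirements of the interface.

from pypylon import pylon

# Open the first camera found by the pylon transport layer (USB3 Vision, GenICam-compliant)
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# Grab a single frame; the 5000 ms timeout is an arbitrary example value
camera.StartGrabbingMax(1)
grab = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if grab.GrabSucceeded():
    image = grab.Array          # image data as a NumPy array
    print("Grabbed frame with shape", image.shape)
grab.Release()
camera.Close()
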
Benefits
  •  Easy connection to single-board computers with USB 2.0 or USB 3.0 connection
  •  Field-tested solutions with Raspberry Pi®, NVIDIA Jetson TK1 and many other systems
  •  Cost-effective solutions for SoMs with associated base boards
  •  Stable data transfer with a bandwidth of up to 350 MB/s

BCON FOR LVDS

BCON, Basler's proprietary LVDS-based interface, allows a direct camera connection to processing boards and thus also to on-board logic modules such as FPGAs (field programmable gate arrays) or comparable components. This allows a lean system design to be achieved, and you benefit from a direct board-to-board connection and data transfer.
The interface is therefore ideal for connecting to a SoM on a carrier / adapter board or with an individually-developed processor unit.
If your system is FPGA-based, you can fully use its advantages with the BCON interface.
BCON is designed with a 28-pin ZIF connector for flat flex cables. It carries the 5 V power supply together with the LVDS lanes for image data transfer and image triggering. You can configure the camera via lines that work with the I²C standard.
Basler's pylon SDK is tailored to work with the BCON for LVDS interface. It is therefore easy to change settings such as exposure control, gain, and image properties using your software code and the pylon API. The image acquisition part of the application must be implemented individually, as it depends on the hardware used.
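As a sketch of how such settings can be changed through the pylon API, the snippet below uses pypylon and the generic GenICam feature names ExposureTime and Gain; the exact node names and units depend on the camera model and are assumptions here.

from pypylon import pylon

camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

# GenICam feature names as used in newer Basler models (assumption:
# older models may expose ExposureTimeAbs / GainRaw instead)
camera.ExposureTime.SetValue(20000.0)   # exposure time in microseconds (example value)
camera.Gain.SetValue(6.0)               # gain in dB (example value)

camera.Close()
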
Benefits
  •  Image processing directly on the camera. This results in the highest image quality without burdening the very limited resources of the downstream processing board.
  •  Direct connection via LVDS-based image data exchange to FPGA
  •  With the pylon SDK, camera configuration is possible via the standard I²C bus without further programming; compatibility with the GenICam standard is ensured.
  •  The image data software protocol is openly and comprehensively documented
  •  Development kit with reference implementation available
  •  Flexible flat flex cable and small connector for applications with maximum space limitations
  •  Stable, reliable data transfer with a bandwidth of up to 252 MB/s

HOW CAN AN EMBEDDED VISION SYSTEM BE DEVELOPED AND HOW CAN THE CAMERA BE INTEGRATED?

Even developers who have had little to do with embedded vision so far will find many ways to develop an embedded vision system. In particular, the switch from a standard machine vision system to an embedded vision system can be made easy. In addition to its embedded product portfolio, Basler offers many tools that simplify integration.
Find out how you can develop an embedded vision system and how easy it is to integrate a camera in our simpleshow video.

MACHINE LEARNING IN EMBEDDED VISION APPLICATIONS

Embedded vision systems often have the task of classifying the images captured by the camera: on a conveyor belt, for example, into round and square biscuits. In the past, software developers spent a lot of time and energy developing intelligent algorithms designed to classify a biscuit by its characteristics (features) as type A (round) or type B (square). In this example that may sound relatively simple, but the more complex the features of an object, the more difficult it becomes.
Machine learning algorithms (e.g. Convolutional Neural Networks, CNNs), however, do not require hand-crafted features as input. If the algorithm is presented with large numbers of images of round and square biscuits, together with the information about which image shows which variety, it automatically learns how to distinguish the two types of biscuits. If it is then shown a new, unknown image, it decides in favour of one of the two varieties on the basis of its "experience" from the images already seen. These algorithms run particularly fast on graphics processing units (GPUs) and FPGAs.
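As a minimal sketch of such a learned two-class classifier, the snippet below defines a small CNN with TensorFlow/Keras. The image size, layer sizes and the labelled biscuit images are purely hypothetical placeholders.

import tensorflow as tf
from tensorflow.keras import layers

# Small two-class CNN (type A: round, type B: square); all sizes are illustrative
model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),        # small grayscale images (assumption)
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),  # probabilities for the two varieties
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=10)   # hypothetical labelled biscuit data
# model.predict(new_image)                           # decide between the two varieties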




TO KNOW MORE ABOUT BASLER CAMERA DISTRIBUTOR IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM



 MV ASIA INFOMATRIX PTE LTD

3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore - 048617
Tel: +65 63296431
Fax: +65 63296432


Friday 7 July 2017

HIGH-SPEED CAMERA TECHNOLOGY


Bayer Filter
  •  Nearly all color sensors follow the same principle (named after its inventor, Dr. Bryce E. Bayer).

  •  The light-sensitive cells or pixels on the sensor are only capable of distinguishing different levels of light. For this reason, tiny color filters (red, green and blue) are placed in front of the pixels as part of the production process.

  •  In a subsequent image processing step, the filtered output values are combined into a "color pixel" again (demosaicing; see the sketch after this list).

  •  To come closer to the perception of the human eye (which is much more sensitive to green than to other colors), twice as many green filters are used.
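
A minimal sketch of this reconstruction step (demosaicing) with OpenCV is shown below; the file name and the RG pattern ordering are assumptions that depend on the actual sensor.

import cv2

# Single-channel raw Bayer image straight from the sensor (hypothetical file)
raw = cv2.imread("bayer_raw.png", cv2.IMREAD_GRAYSCALE)

# Combine the filtered values into color pixels again; the Bayer pattern
# (here RG) is an assumption and depends on the sensor
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
cv2.imwrite("demosaiced.png", bgr)
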
Burst Trigger Mode
  •  Generally, a trigger event tells the camera when to start recording; after a predefined amount of time (or when the memory is full) the recording stops.
  •  Depending on the application yet another trigger event tells the camera when to terminate the recording.
  •  In Burst Trigger Mode, however, the camera records for as long and as often as the trigger is active (comparable to the trigger mechanism of a machine gun).
CCD / CMOS comparison
  •  Abbreviations for the two main sensor technologies, describing the inner structure of the chip:

  •  "CMOS": complementary metal-oxide semiconductor

  •  "CCD": charge coupled device
 CCD:
A CCD sensor provides a defined electrical charge per pixel, i.e. a certain number of electrons according to the previous exposure.
These have to be captured pixel by pixel with a subsequent electronic circuit, converted into a voltage quantity and recalculated into a binary value.
This operation is rather time-consuming. In addition, the whole frame has to be grabbed, which requires comprehensive postprocessing.
 CMOS:
CMOS sensors can be produced more cheaply and offer the possibility of on-board preprocessing; the information of every pixel can be provided in digitised form.
  •  Thus the camera can be designed smaller, and random access to particular parts of the image ("ROI", region of interest) is possible (see the sketch after this list).

  •  Needing fewer external circuits results in reduced power consumption of the camera, and the stored frames can be read out much faster.
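
To illustrate the ROI idea in software terms only, a full frame can be reduced to a sub-region by simple array slicing; this is a simplified analogy, since a real CMOS camera reads the ROI out directly on the sensor.

import numpy as np

frame = np.zeros((1024, 1280), dtype=np.uint8)   # hypothetical full frame (height x width)

# Region of interest: only this window is relevant for the application
x, y, w, h = 400, 300, 320, 240                  # example coordinates
roi = frame[y:y + h, x:x + w]
print(roi.shape)                                 # (240, 320)
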
Dynamic Range Adjustment
  •  The human eye has a very extensive dynamic range, i.e. can evaluate very low lighting conditions (like candle- or starlight) as well as extreme light impressions (reflected sunlight on a water surface).

  •  This corresponds to a (logarithmic) dynamic range of 90 dB. That means two objects whose quantities of light differ by a factor of 1,000,000,000 can both be seen clearly.

  •  Unlike this, a CMOS camera has a linear dynamic range of about 60 dB, which equals a ratio of 1:1000 (see the conversion example after this list).

  •  If, for instance, a recording setup requires identifying dim component labels next to strong welding reflections, image details within the reflection area cannot be seen.

  •  Cameras with Dynamic Range Adjustment enable the user to adjust the linear response in certain areas: overexposed objects become darker without losing intensity on the dark ones.

  •  Thus minimal variations of luminosity can be detected, even in areas of intense reflective light.
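
As a worked example of the decibel figures above, the small snippet below converts dynamic range in dB to an intensity ratio, assuming the 20·log10 convention commonly used for image sensors.

def db_to_ratio(db):
    # dynamic range in dB -> intensity ratio (assuming the 20 * log10 convention)
    return 10 ** (db / 20)

print(db_to_ratio(60))   # 1000.0, i.e. the 1:1000 ratio stated above
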
Fixed Pattern Noise (FPN)
  •  Every single pixel or photodiode in a CMOS camera has a construction-related tolerance.

  •  Even without any exposure to light the diodes generate slightly varying output values.

  •  To avoid a corruption of the image, a process similar to the white balance in digital photography compares a reference picture with a dark frame.

  •  This frame contains only the detected differences and is used to correct the subsequent images of the sensor.

  •  Only after this kind of postprocessing is, for example, a plain white area displayed as homogeneously white (see the dark-frame sketch after this list).
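
The snippet below is a minimal NumPy sketch of such a dark-frame correction; the synthetic offsets and the clipping are assumptions about a typical implementation, not Mikrotron's in-camera processing.

import numpy as np

# Synthetic per-pixel offsets as they would show up in a frame taken without light
rng = np.random.default_rng(0)
fixed_pattern = rng.normal(8, 2, size=(480, 640))    # hypothetical FPN offsets

dark_frame = fixed_pattern                           # reference frame without exposure
raw_image = 120 + fixed_pattern                      # plain grey area plus FPN

# Correction: subtract the stored offsets from every subsequent image
corrected = np.clip(raw_image - dark_frame, 0, 255).astype(np.uint8)
print(corrected.min(), corrected.max())              # 120 120 -> homogeneous again
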
Gigabit Ethernet (GigE)
  •  This data transfer technology allows the transmission among various devices (server, printer, mass storage, cameras) within a network.

  •  While standard Ethernet is too slow for the transfer of comprehensive image data, Gigabit Ethernet (GigE) with a maximum transfer rate of 1000 Mbit/s, or 1 Gigabit per second, ensures dependable image transfer in machine vision cameras.
GigE Vision
  •  GigE Vision is an industrial standard developed by the AIA (Automated Imaging Association) for high-performance machine vision cameras, optimised for the transfer of large amounts of image data.

  •  GigE Vision is based on the network structure of Gigabit Ethernet and includes a hardware interface standard (Gigabit Ethernet) and communication protocols, as well as standardised communication and control modes for cameras.

  •  The GigE Vision camera control is based on a command structure named GenICam.

  •  This establishes a common camera interface to enable communication with third-party vision cameras without any customisation.
ImageBLITZ automatic trigger
  •  To capture an unpredictable or unmeasurable event for "in-frame" triggering purposes, Mikrotron invented the ImageBLITZ operation mode.

  •  In most cases, no further equipment or elaborate trigger sensing devices for camera control are needed; the picture itself is the trigger.

  •  Within certain limits, ImageBLITZ is adjusted to react only to the expected changes in a predefined area of the picture (see the simplified sketch after this list).
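
The snippet below is a greatly simplified software analogy of such an in-frame trigger (the real ImageBLITZ evaluation runs inside the camera): the mean brightness of a predefined window is compared against a threshold, and the frame itself acts as the trigger.

import numpy as np

def in_frame_trigger(frame, window, threshold):
    # frame: 2-D grayscale image, window: (x, y, w, h), threshold: mean brightness
    x, y, w, h = window
    return frame[y:y + h, x:x + w].mean() > threshold

# Hypothetical usage: trigger when a bright object enters the predefined area
frame = np.zeros((480, 640), dtype=np.uint8)
frame[200:240, 300:360] = 255                              # simulated event
print(in_frame_trigger(frame, (300, 200, 60, 40), 100))    # True
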
Multi Sequence Mode
  •  In this mode, the available memory of the camera is divided into many individual sequences. Following each trigger event (e.g. a keystroke or a light barrier being tripped), a predefined number of frames is saved.

  •  For repeatedly occurring events, the different variations can be compared and provide a valuable basis for the analysis of malfunctions or technical processes.

  •  Even a previously determined number of frames before and after the trigger event can be saved within every recorded sequence (see the ring-buffer sketch after this list).
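
A simplified sketch of how pre- and post-trigger frames could be collected is shown below, using a Python ring buffer (collections.deque). The sequence sizes are arbitrary examples, and the code only mimics in software what the camera does in its internal memory.

from collections import deque

PRE, POST = 50, 100                          # example: frames to keep before/after the trigger

def record_sequence(frame_source, trigger_index):
    # Collect PRE frames before and POST frames after the trigger event (simplified)
    pre_buffer = deque(maxlen=PRE)           # ring buffer holding the most recent frames
    sequence = []
    for i, frame in enumerate(frame_source):
        if i < trigger_index:
            pre_buffer.append(frame)         # keep overwriting the oldest pre-trigger frame
        else:
            if not sequence:
                sequence.extend(pre_buffer)  # freeze the pre-trigger history at the event
            sequence.append(frame)
            if len(sequence) >= PRE + POST:
                break
    return sequence

frames = range(1000)                         # hypothetical stand-in for the camera frames
seq = record_sequence(frames, trigger_index=400)
print(len(seq), seq[0], seq[-1])             # 150 350 499
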
Sobel Filter
  •  In several machine vision applications, such as motion analysis, positioning or pattern matching, it is essential to determine certain edges, outlines or coordinates.

  •  The Sobel filter uses an edge-detection algorithm to detect just those edges and produces a chain of pixels (simply on/off) that traces the edges.

  •  This process makes it possible to cut down the data stream by more than 80% already in the camera's FPGA chip. Less data has to be transferred and processed, so the effective transfer rate rises considerably (see the sketch after this list).
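
A minimal sketch of such an edge extraction with OpenCV's Sobel operator is given below; the input file and the threshold value are arbitrary assumptions, and the reduction runs on the host here rather than in the camera's FPGA.

import cv2
import numpy as np

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image

# Sobel gradients in x and y, combined into an edge magnitude
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
magnitude = cv2.magnitude(gx, gy)

# Reduce to an on/off edge map; only these pixels need to be transferred
edges = (magnitude > 100).astype(np.uint8) * 255       # threshold is an example value
print("edge pixels:", int(np.count_nonzero(edges)), "of", edges.size)
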
Suspend to Memory Mode
  •  The operation of a camera is reduced to the preservation of recorded images.

  •  Due to the resulting low power consumption, the battery charge lasts significantly longer.

  •  This mode is activated either automatically after recording or manually by pressing a button.

  •  Thus the recording memory can be preserved for 24 hours.




TO KNOW MORE ABOUT MIKROTRON HIGH SPEED CAMERA DISTRIBUTOR IN SINGAPORE, CONTACT MVASIA INFOMATRIX PTE LTD AT +65 6329-6431 OR EMAIL US AT INFO@MVASIAONLINE.COM


 MV ASIA INFOMATRIX PTE LTD


3 Raffles Place, #07-01 Bharat Building,
Orchard Road
Singapore - 048617
Tel: +65 63296431 
Fax: +65 63296432