Friday, 24 April 2020


The difference between vision sensors and vision systems is fairly basic:

A vision sensor performs simple inspections, such as answering a yes-or-no question on a production line. A vision system handles complex tasks, such as helping a robot arm weld parts together in an automated factory.

Machine vision sensors capture light through a camera’s lens and work together with digital signal processors (DSPs) to translate that light into pixels that generate digital images. Software then analyzes the pixel patterns to reveal critical facts about the object being photographed.
Automated production doesn’t have to mean robots building pickup trucks and smartphones. Many automated factory tasks require simple, straightforward kinds of vision sensor data:

  • Presence or absence. Is there a part within the sensor’s field of view? If the sensor answers yes, then machine vision software gives the OK to move the part to its correct place in the production process.
  • Inspection. Is the part damaged or flawed? If the sensor sees defects, then the part gets routed out of production.
  • Optical character recognition (OCR). Does the part contain specific words or text? Answering this question can help automated systems sort products by brand name or product description.
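A vision sensor’s presence-or-absence decision can be sketched in a few lines. The Python below is a toy illustration, not any vendor’s API: the frame is a plain 2D list of grayscale values, and the brightness threshold is an assumed calibration value.

```python
def part_present(frame, threshold=40):
    """Report whether the mean pixel intensity suggests a part is in view.

    Assumes a dark belt and a bright part; `threshold` would be tuned
    per installation.
    """
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels) > threshold

empty_belt = [[5, 8, 6], [7, 4, 9]]        # dark background only
with_part = [[5, 200, 210], [7, 190, 9]]   # bright part in the field of view

print(part_present(empty_belt))  # False -> nothing to route
print(part_present(with_part))   # True  -> pass the part downstream
```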
Cognex machine vision systems use multiple sensors to handle all of these basic tasks as well as more complicated challenges:
  1. Guides/alignment: When parts require an exact position or alignment, vision systems use sensors to identify the correct parts and place them exactly where they need to go.
  2. Code reading: Codes on packages and individual components contain vital data that vision systems acquire in real time to sort finished goods and differentiate between parts within a production process.
  3. Gauges/measurement: Sensors can ensure that machined parts are cut to the proper dimensions.
  4. 3D imaging: Sensors create three-dimensional representations of parts and products. These images can help automate inspections and tell robotic arms where to pick up and place parts.
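The gauging/measurement task above boils down to converting a pixel measurement into real-world units and checking a tolerance. This is a minimal sketch with an assumed calibration factor; the function name and figures are invented for illustration.

```python
def within_tolerance(width_px, mm_per_px, nominal_mm, tol_mm):
    """Convert a pixel measurement to millimetres via a calibration
    factor, then check it against the nominal dimension and tolerance."""
    measured_mm = width_px * mm_per_px
    return abs(measured_mm - nominal_mm) <= tol_mm

# Hypothetical calibration: 0.05 mm per pixel at the working distance.
print(within_tolerance(412, 0.05, nominal_mm=20.5, tol_mm=0.2))  # True  (20.6 mm)
print(within_tolerance(430, 0.05, nominal_mm=20.5, tol_mm=0.2))  # False (21.5 mm)
```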
Every company has to decide whether it needs simple vision sensors or more advanced vision systems. Vision sensors are designed to be easy to install and implement, so factory personnel typically can set them up and configure them without a lot of outside assistance. When the imaging job requires a simple go/no-go decision, vision sensors may be all the company needs.

Vision systems, by contrast, require more expertise and a significant investment of time and money for configuration, installation and training. Often, companies turn to third-party integrators who have deep expertise in vision system installations.

Every company in the machine vision sector has its own way of defining the difference between machine vision sensors and systems. Cognex, for instance, builds vision sensors that perform specific kinds of tasks, like quality control in food processing. Our vision systems combine advanced software with industrial-strength cameras to enable a broad spectrum of factory automation applications.

One way to distinguish between vision systems and sensors is to imagine hundreds of beer bottles on a conveyor belt in a bottling plant. A vision sensor can make sure every bottle has a cap. If the cap is there, then the bottle gets approved and sent to packaging, where another sensor makes sure every six-pack has six bottles.

But the bottling company may want to identify when a bottle cap is skewed past a certain angle. Or, perhaps they want to ensure that the six-pack doesn’t accidentally mix multiple beer varieties. That’s more likely to require a vision system.
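The contrast can be put in code. A hedged sketch with invented function names and thresholds: the skew-angle and variety checks illustrate the kind of judgment a vision system makes, with the measured angle assumed to come from upstream image analysis.

```python
def cap_ok(skew_deg, max_skew_deg=5.0):
    """System-level check: pass a cap only if its measured tilt is small.

    The skew angle is assumed to be produced by upstream image analysis.
    """
    return abs(skew_deg) <= max_skew_deg

def six_pack_ok(labels):
    """Pass a six-pack only if it holds six bottles of a single variety."""
    return len(labels) == 6 and len(set(labels)) == 1

print(cap_ok(2.5))                             # True
print(cap_ok(8.0))                             # False
print(six_pack_ok(["lager"] * 6))              # True
print(six_pack_ok(["lager"] * 5 + ["stout"]))  # False
```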


Monday, 23 March 2020



The pace of technological change over the last decade has been nearly unprecedented in human history, and it’s only poised to become even more breathtaking in the years ahead: blockchain, robotics, edge computing, artificial intelligence (AI), big data, 3D printing, sensors, machine vision, and the internet of things are just some of the massive technological shifts on the cusp for industries.

Strategically planning for the adoption and leveraging of some or all these technologies will be crucial in the manufacturing industry. In the United States, manufacturing accounts for $2.17 trillion in annual economic activity, but by 2025 – just half a decade away – McKinsey forecasts that “smart factories” could generate as much as $3.7 trillion in value. In other words, the companies that can quickly turn their factories into intelligent automation hubs will be the ones that win long term from those investments.

“If you’re stuck to the old way and don’t have the capacity to digitalize manufacturing processes, your costs are probably going to rise, your products are going to be late to market, and your ability to provide distinctive value-add to customers will decline,” Stephen Ezell, an expert in global innovation policy at the Information Technology and Innovation Foundation, says in a report from Intel on the future of AI in manufacturing.

These technologies, as applied in a factory or manufacturing setting, are no longer nice to have; they are business critical. According to a recent research report from Forbes Insights, 93% of respondents from the automotive and manufacturing sectors classified AI as ‘highly important’ or ‘absolutely critical to success’. And yet, 56% of these respondents plan to increase spending on artificial intelligence by less than 10%.

The disconnect between recognizing the importance of new technologies that allow for more factory automation and the willingness to spend on them will be the difference between those companies that win and those that lose. Perhaps this reticence to invest in something like AI could be attributed to the lack of understanding of its ROI, capabilities, or real-world use cases. Industry analyst Gartner, Inc. still slots many of AI’s applications into the “peak of inflated expectations” after all.

But AI, specifically deep learning or examples-based machine vision, combined with traditional rules-based machine vision, can give a manufacturing factory and its teams superpowers. Take a process such as the complex assembly of a modern smartphone or other consumer electronic device. The combination of rules-based machine vision and deep learning can help robotic assemblers identify the correct parts, detect differences such as missing screws or misaligned casings, flag parts that are absent or assembled in the wrong place on the product, and quickly determine whether those differences are actual problems. And they can do this at an unfathomable scale.
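The rules-based half of that pairing is easy to picture. The sketch below is illustrative only: the screw positions and intensity rule are invented, and the frame is a dict of bright pixel coordinates standing in for a real image.

```python
# Hypothetical expected screw-head locations as (row, col) pixels.
EXPECTED_SCREWS = [(10, 12), (10, 48), (40, 12), (40, 48)]

def missing_screws(frame, spots=EXPECTED_SCREWS, min_intensity=100):
    """Rules-based check: a screw head is assumed to show up as a bright
    region at a known coordinate; report every expected spot that is dark."""
    return [spot for spot in spots if frame.get(spot, 0) < min_intensity]

# Sparse stand-in for an image: only bright pixels are listed.
frame = {(10, 12): 220, (10, 48): 210, (40, 12): 215}  # (40, 48) unlit

print(missing_screws(frame))  # [(40, 48)] -> one screw missing
```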

The combination of machine vision and deep learning is the on-ramp for companies to adopt smarter technologies that will give them the scale, precision, efficiency, and financial growth for the next generation. But understanding the nuanced differences between traditional machine vision and deep learning, and how they complement rather than replace each other, is essential to maximizing those investments.


Thursday, 20 February 2020



Since its inception in the 1980s, machine vision has concerned itself with two things: improving the technology’s power and capability and making it easier to use. Today, machine vision is turning to higher-resolution cameras with greater intelligence to empower new automated solutions both on and off the plant floor — all with a simplicity of operation approaching that of the smartphone, which significantly reduces engineering requirements and associated costs.

And, just like in other industries which are benefiting from rapid advancements in technology like big data, the cloud, artificial intelligence (AI), and mobile, so too will manufacturers, logistics operations, and other enterprises benefit from three key advances in machine vision for automation.


While 1-, 2-, and 5-megapixel (MP) cameras continue to make up the bulk of machine vision camera shipments, we’re seeing considerable interest in even higher-resolution smart cameras, up to 12 MP. High-resolution sensors mean that a single smart camera inspecting an automobile engine can do the work of several lower-resolution smart cameras while maintaining high-accuracy inspections.

Cognex’s patent-pending High Dynamic Range Plus (HDR+) image processing technology provides even better image fidelity than your typical HDR. It will help smart cameras inspect multiple areas across large objects where lighting uniformity is less than ideal. In the past, lighting variations could be mistaken for defects or the feature was not even visible. Today, HDR+ helps reduce the effects of lighting variations, enabling applications in challenging environments that were beyond the capability of machine vision technology just a few years ago.

While advanced smart cameras run HDR+ technology on field-programmable gate arrays (FPGAs) to improve the quality of the acquired image at frame-rate speeds, complementary sensor technology, such as time-of-flight (ToF) sensors, is being incorporated to enable “distance-based dynamic focus”.

The new high-powered integrated torch (HPIT) image formation system, using ToF distance measurement and high-speed liquid lens technology, is also making an impact by enabling dynamic autofocus at frame rate. The newest barcode readers incorporate HPIT capability for applications such as high-speed tunnel sortation and warehouse management, where package and product size can vary significantly, requiring the camera to adapt quickly to different focal ranges.


Just like AI’s impact in other industries, deep learning vision software for factory automation is allowing enterprises to automate inspections that previously could only be done manually, and to solve more efficiently those complex inspection challenges that are cumbersome or time-consuming with traditional rule-based machine vision.

The biggest use driving investment in deep learning is the potential of replacing, in many cases, hundreds of human inspectors with deep learning-based inspection systems. For the first time, manufacturers have a technology that offers an inspection solution achieving performance comparable to that of a human.

One example of how deep learning will benefit organizations is in defect detection inspection. Every manufacturer wants to eliminate industrial defects as much as possible and as early as possible in the manufacturing process to reduce downstream impacts that cost time and money.

Defect detection is challenging because it is nearly impossible to account for the sheer amount of variation in what constitutes a defect or what anomalies might fall within the range of acceptable variation.

As a result, many manufacturers utilize human inspectors at the end of the process to perform a final check for unacceptable product defects. With deep learning, quality engineers can train a machine vision system to learn what is an acceptable or unacceptable defect from a data set of reference pictures rather than program the vision system to account for the thousands of defect possibilities.
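The train-from-examples idea can be shown with a deliberately tiny stand-in for a deep learning model: a nearest-centroid classifier over hand-made feature vectors. Everything here (features, labels, values) is invented for illustration; a real system would learn from thousands of labeled reference images.

```python
def centroid(vectors):
    """Average a list of equal-length feature vectors, dimension by dimension."""
    return tuple(sum(dim) / len(vectors) for dim in zip(*vectors))

def train(examples):
    """examples: {label: [feature vectors]} -> {label: class centroid}"""
    return {label: centroid(vecs) for label, vecs in examples.items()}

def classify(model, sample):
    """Assign the label whose centroid is closest to the sample."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], sample))

# Hypothetical 2-D features (e.g. blob area, edge roughness), already scaled.
model = train({
    "acceptable": [(0.10, 0.20), (0.20, 0.10)],
    "defect":     [(0.80, 0.90), (0.90, 0.80)],
})
print(classify(model, (0.12, 0.18)))  # acceptable
print(classify(model, (0.88, 0.80)))  # defect
```

Adding a new kind of defect means adding labeled examples and retraining, rather than hand-coding another rule.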


An important development for smart camera vision systems enabling Industry 4.0 initiatives is Open Platform Communications Unified Architecture (OPC UA). With contributions from all major machine vision trade associations around the world, OPC UA is an industrial interoperability standard developed to help machine-to-machine communication.

Combined with advanced sensor technology and trends such as deep learning, OPC UA will help transition machine vision technology from a point solution to bridge the industrial world inside the plant and the physical world outside it. Today, vision systems and barcode readers are key sources of data for modern enterprises.




For the automotive industry, pedestrian safety has been a serious concern since the horseless carriage. Londoner Arthur Edsall was the first driver to strike and kill a pedestrian in 1896 at a speed of four miles per hour. It took the U.S. Congress almost seventy years to impose automotive safety standards and mandate the installation of safety equipment and another thirty years before airbags became a required safety feature. Automotive safety standards in the United States are promulgated by a process of reviewing accidents after they have occurred.

In 2019, the National Transportation Safety Board (“NTSB”) finally addressed this standards-promulgation process in its Most Wanted List of transportation safety improvements, calling for an increase in the implementation of collision-avoidance systems in all new highway vehicles. This change in policy derived from a 2015 study (SIR-15/01) that described the benefits of forward-collision-avoidance systems and their ability to prevent thousands of accidents.

After that report was published, an agreement was reached with the National Highway Traffic Safety Administration (“NHTSA”) and the Insurance Institute for Highway Safety that would require compliance with the Automatic Emergency Braking standard (“AEB”) on all manufactured vehicles by 2022. However, the agreement did not identify the specific technology that would enable AEB, and the question remains whether such technology is readily available and economically viable for industry-wide adoption.


The pace of technology over the last thirty years has been astronomical, yet technology to make driving safer has not kept pace. A computer that not too long ago was the size of a garage now fits into the palm of your hand. Today driving should be safer than ever, but the reality is that without the implementation of available modern technologies, the uncertainties of the road will always be with us. According to the NHTSA, there were 37,461 traffic fatalities in 2016 in the United States.

In 2015, there were a total of 6,243,000 passenger car accidents. Globally, there is a fatality every twenty-five seconds and an injury every 1.25 seconds. In the United States there is a fatality every thirteen minutes and an injury every thirteen seconds. These statistics are mind-blowing. Compare this with recent events in the aviation industry: two Boeing 737 MAX 8 airplanes crashed, killing 346 people (the same number of people that die as a result of automobile accidents every 144 minutes), and all Boeing 737 MAX 8 airplanes were grounded.

The cost of automotive accidents is high. According to the National Safety Council, in the United States the annual cost of health care resulting from cigarette smoking is approximately $300 billion, whereas the annual cost of health care for injuries arising from automobile accidents is roughly $415 billion.

Technology to protect automobile occupants has reduced the number of driver and passenger fatalities. However, the number of people who die as a result of an accident outside the automobile continues to climb at an alarming rate. Pedestrians are at the greatest risk, especially after dark.

The NHTSA reports that in 2018, 6,227 pedestrians were killed in United States traffic accidents, with seventy-eight percent of pedestrian deaths occurring at dusk, dawn, or night. In the United States, pedestrian fatalities have increased forty-one percent since 2008. Solutions to address pedestrian fatalities are needed to meet the standards by 2022.


Ultimately, it is safer cars and safer drivers that make driving safer, and automotive designers need to deploy every possible technological tool to improve driver awareness and make cars more automatically responsive to impending risks. Today’s safest cars can be equipped with a multitude of cameras and sensors to make them hyper-sensitive to the world around them and intelligent enough to take safe evasive action as needed. Microprocessors can process images and identify subject matter 1,000,000 times faster than a human being.

Advanced Driver Assist Systems (“ADAS”) are becoming the norm, spotting potential problems ahead of the automobile making auto travel safer for drivers, passengers, and pedestrians, not to mention the more than one million ‘reported’ animals struck by automobiles in the United States annually resulting in $4.2 billion in insurance claims each year. The advances we have seen so far are the first steps to evolving towards a future of truly autonomous vehicles that will revolutionize both personal and commercial transportation.

Drivers need no longer rely on eyes alone to maintain situational awareness. Early generations of vision-assisting cameras were innovative, but they were not particularly intelligent and could do little to perceive the environment around the car and communicate information that could be used for driver decision-making.

Today, with tools such as radar, light detection and ranging (“LIDAR”), cameras, and ultrasound installed, a car knows much more about the environment than the driver does and can control the vehicle faster and more safely than a human driver. Risky driving conditions such as rain, fog, snow, and glare are less hazardous when a driver is assisted by additional onboard sensors and data processors.

One of the most advanced automotive sensors is a thermal sensor that allows a driver and the automobile to perceive the heat signature of anything ahead. Previously used mainly for military and commercial applications, early forms of night vision first came to the mainstream automotive market in the 2000 Cadillac DeVille, albeit as a cost-prohibitive accessory priced at nearly $3,000.

Since then, thermal cameras and sensors have become smaller, lighter, faster and cheaper. After years of exclusive availability in luxury models, thermal sensors are now ready to take their place among other automotive sensors to provide a first line of driving defense that reaches far beyond the reach of headlights in all vehicles, regardless of the cost of the vehicle.


Thursday, 11 April 2019



Thanks to technical, scientific and medical progress, human life expectancy has increased considerably in recent decades. Precise, highly technical and increasingly automated equipment in large hospitals and labs now provides valuable support in numerous measuring and analytical tasks.



The concept of lab automation in general can be interpreted in many ways and includes various tasks: from simple applications such as weighing, to complex robotic and analytical systems, process tracking, and storage systems. This results in numerous possible camera applications in the medical, scientific, pharmaceutical and analytical fields. Some of these are obviously recognizable, such as those in an imaging urine sediment analysis device, while others run in the background and provide information for the medical diagnosis we receive from a physician. Others in turn support processes that are internal to the devices and not directly connected with the actual detection process.

These applications range from the simple input of a barcode to the support of laser technologies; from the path blood traverses, starting with the prick of the needle when it is drawn, through the various test processes, to the result; up to complex processes in cell technology which offer scientists insights into the origins of diseases, thus advancing diagnostic and therapeutic innovations.


Hospital and research labs increasingly follow the trend towards automation. The essential drivers for this development are cost pressure, speed, quality and reproducibility, and access to new technologies:


Health systems and research institutions are subject to growing economic strains and try to counteract this pressure with cost reductions in their services. Automation through modern technologies with inexpensive system components makes it possible to lower the cost of lab equipment, relieves the staff, and frees up capacity that can be utilized elsewhere.


The faster processing of analyses enables clinical and analytical contract laboratories to complete more analyses in the same amount of time, giving them an advantage over competitors since they can serve their customers faster. Automation can also help generate more results per unit of time in research, which shortens project periods and makes new developments or technologies available sooner.


Many examinations, once manually executed, are increasingly handled by machines, whose technological features make it possible to complete these tasks with greater precision and improved reproducibility. Thanks to an applied vision system and automated microscopy, for example, researchers can now view detailed and precise image data on their office monitors without having to look through eyepieces in darkrooms. Furthermore, the captured image data offers the capability of documentation and archiving, which meets the growing demands of quality management systems. Automated systems also aren’t subject to the process-related variances of manual work steps, giving them greater reproducibility and paving the way for advancing standardization. Digital image data can be viewed across different locations if desired, e.g. for a scientific exchange or an external diagnostic consultation. The conditions for a reliable diagnostic statement are therefore improved by camera-supported examinations and analyses.


Lab automation efficiently makes new technologies accessible to many users. This makes it possible for research to determine the pathogenic processes of diseases more quickly. As a result, for example, diseases can be recognized earlier with the help of molecular-biological analyses in in-vitro diagnostics, which may reduce or even prevent their onset and the associated costly therapies that are so strenuous for patients. Devices that are easy to use and inexpensive enable diagnostics even in regions with economic and infrastructure-related challenges. This means medical care can be improved in epidemic regions, since staff in those areas are often less well-trained, lab equipment is of a lower standard overall, and the financial means of the affected patients are low. Here we can expect an increasing number of so-called POC (point-of-care) systems and lab-on-a-chip technologies.


Below are some examples of typical application areas for automated, camera-based applications in labs:


This includes general camera applications that generate imaging and data not for purely analytical but for process-supporting purposes, e.g. barcode/matrix code acquisition, as applied in most devices for in-vitro diagnostics (IVD). This could involve the simple identification of a patient’s sample vial, or the transmission of data from the reagent used, which the device needs in order to calculate the analyses and for batch documentation for purposes of quality management. In an automatic exchange with a lab information system, the right results are thus attributed to the requirements of a patient sample and managed digitally.

Many lab devices work with liquid test material. Depending on the application area, different parameters in this so-called liquid handling process must be determined and/or checked. This may be e.g. the state of the liquid (no air may be pipetted, since it would falsify the analysis result), the type of vial, the color of the lid to code the test material inside (e.g. whether it is a serum or whole blood vial), or color properties / layers or irregularities (bubbles, foam) in the liquid. Cameras may offer advantages since they don’t need contact with the sample, and don’t necessitate a removal of the lid, in contrast to other methods such as the capacitive determination of the liquid state. This prevents such problems as contamination, and enables higher flow rates.
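The lid-color check mentioned above, for instance, reduces to matching a measured color against a small code table. The colors and labels below are invented for illustration, not an actual tube-cap standard:

```python
# Illustrative lid-color codes; real labs follow their own conventions.
LID_CODES = {
    "serum": (200, 180, 60),        # gold cap
    "whole_blood": (120, 40, 150),  # lavender cap
}

def classify_lid(rgb):
    """Return the code whose reference color is closest to the measured RGB."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(LID_CODES, key=lambda name: sq_dist(LID_CODES[name], rgb))

print(classify_lid((195, 175, 70)))  # serum
print(classify_lid((110, 50, 140)))  # whole_blood
```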


Automated microscopy includes, for instance, applications of light and fluorescence microscopy for in-vitro diagnostics (IVD), in life sciences, pharmaceutical research and in digital pathology.

Different manufacturers use camera systems in their devices to diagnose autoimmune diseases, or diagnose diseases of the blood and hematopoietic organs in hematology, as well as in digital pathology. Pathologists examine tissue sections or cell samples for pathological changes. To this end, they prepare slides which can be examined by microscope to draw conclusions about diseases and provide valuable information for the diagnosis and therapy options, which may not be discernible through other means such as radiology.

There is a wide selection of additional automatic microscope systems with different purposes. From a small device the size of half a shoe box, used for simple cell counting, to systems that are used directly in incubators and enable time-dependent live-cell imaging without manual intervention, all the way to the high-content screening systems that are used e.g. in pharmaceutical substance screening – anything is possible.


In addition to the above-mentioned fields there is a wide variety of other potential applications and use cases for cameras across the scientific field, for example in protein and nucleic acid analytics, microbiology, particle analytics and more. It’s important to offer cameras with the right features to cover a wide range of applications in the various specialty areas. Independently of the camera’s specific product features, it should offer easy and flexible integration with an efficient and convenient SDK, and, of course, provide high quality and reliability. Technically excellent support, quickly and readily available, also simplifies the integration process for the system developer.



Thursday, 28 March 2019



In recent years, a miniaturization trend has been established in many areas of electronics. For example, ICs have become more and more integrated and circuit boards in the electrical industry have become smaller and more powerful. This has also made PCs, mobile phones and cameras more and more compact and also more powerful. This trend can also be observed in the world of vision technology.


A classic machine vision system consists of an industrial camera and a PC:

Both were significantly larger a few years ago. But within a short time, smaller and smaller PCs became possible, and in the meantime the industry saw the introduction of single-board computers (SBCs), i.e. computers built on a single board. At the same time, camera electronics became more compact and cameras successively smaller. On the way to even higher integration, small cameras without housings are now offered, which can be easily integrated into compact systems.

Due to these two developments, the reduction in size of the PC and the camera, it is now possible to design highly compact camera vision systems for new applications. These systems are called embedded (vision) systems.

Design and use of an embedded vision system

An embedded vision system consists, for example, of a camera, a so-called board-level camera, which is connected to a processing board. Processing boards take over the tasks of the PC from the classic machine vision setup. As processing boards are much cheaper than classic industrial PCs, vision systems can become smaller and also more cost-effective. The interfaces for embedded vision systems are primarily USB, Basler’s BCON for MIPI, or BCON for LVDS.

Embedded vision systems are used in a wide range of applications and devices, such as in medical technology, in vehicles, in industry and in consumer electronics. Embedded systems enable new products to be created and thereby create innovative possibilities in several areas.

Which Embedded Systems Are Available?

A so-called SoC (system on chip) lies at the heart of all embedded processing solutions. This is a single chip on which the CPU (which may be multiple units), graphics processors, controllers, other special processors (DSP, ISP) and other components are integrated.

Due to these efficient SoC components, embedded vision systems have become available in such a small size and at a low cost only recently.

As embedded systems, there are popular single-board computers (SBC), such as the Raspberry Pi® or DragonBoard®. These are mini-computers with the established interfaces (USB, Ethernet, HDMI, etc.) and a range of features similar to traditional PCs or laptops, although the CPUs are of course less powerful.

Embedded vision solutions can also be designed with a so-called SoM (system on module, also called computer on module or CoM). In principle, an SoM is a circuit board which contains the core elements of an embedded processing platform, such as the SoC, storage, power management, etc. An individual carrier board is required for the customization of the SoM to each application (e.g. with the appropriate interfaces). This is connected to the SoM via specific connectors and can be designed and manufactured relatively simply. The SoMs (or the entire system) are cost-effective on the one hand since they are available off-the-shelf, while on the other hand they can also be individually customized through the carrier board.

Completely individual processing boards in the form of a full custom design may also be a sensible choice for high quantities.

Characteristics of Embedded Vision Systems versus Standard Vision Systems

Most of the above-mentioned single board computers and SoMs do not include the x86 family processors common in standard PCs. Rather, the CPUs are often based on the ARM architecture.

The open-source Linux operating system is widely used as an operating system in the world of ARM processors. For Linux, there is a large number of open-source application programs, as well as numerous freely-available program libraries. Increasingly, however, x86-based single-board computers are also spreading. A consistently important criterion for the computer is the space available for the embedded system.

For the software developer, the program development for an embedded system is much more complex than for a standard PC. While the PC used in standard software development is also the main target platform (meaning the type of computer which the program is later intended to run on), this is different in the case of embedded software, where the target system generally can’t be used for the development due to its limited resources (CPU performance, storage). This is why the development of embedded software also uses a standard PC on which the program is coded and compiled with tools that may get very complex. The compiled program must then be copied to the embedded system and subsequently be debugged remotely.

When developing the software, it should be noted that the hardware concept of the embedded system is oriented to a specific application and thus differs significantly from the universally usable PC.

However, the boundary between embedded and desktop computer systems is sometimes difficult to define. Just think of the popular Raspberry Pi, which on the one hand has many features of an embedded system (ARM-based, single-board construction), but on the other hand can cope with very different tasks and, with the connection of a monitor, mouse and keyboard, is therefore a universal computer.

What Are the Benefits of Embedded Vision Systems?

In some cases, much depends on how the embedded vision system is designed. An SBC (single-board computer) is often a good choice, as this is a standard product. It is a small compact computer that is easy to use. This solution is also useful for developers who have had little to do with embedded vision.

On the other hand, however, the single-board computer is a system which contains unused components and thus generally does not allow the leanest system configuration. For that reason, this approach is not very economical in terms of manufacturing costs and is more suitable for small unit numbers, where the development costs must be kept low while the manufacturing costs are of secondary importance.

The leanest setup is obtained with a full-custom design, a system that is highly optimized for individual applications. But this involves high integration costs and the associated high development expenditures. This solution is therefore suitable for large unit numbers.

An approach with a commercially available system on module (SoM) and an appropriately customized carrier board is a compromise between an SBC and a full-custom design (see also above: “Which embedded systems are available?”). The manufacturing costs are not as optimized as in a full-custom design (a setup with a carrier board plus a more or less generic SoM is, after all, somewhat more complex), but the hardware development costs are lower, since a significant part of the hardware development is already completed with the SoM. A module-based approach is therefore a very good choice for medium unit numbers, where manufacturing and development costs must be well balanced.
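The trade-off between the three approaches can be sketched as a simple break-even calculation. The cost figures below are purely hypothetical placeholders, not vendor quotes; the point is only that each approach wins in a different volume regime:

```python
# Hypothetical cost figures for illustration only -- real numbers
# depend entirely on the project and the chosen hardware.
OPTIONS = {
    # name: (one-off development cost, per-unit manufacturing cost)
    "SBC":         (10_000, 250),
    "SoM+carrier": (40_000, 150),
    "full-custom": (120_000, 90),
}

def total_cost(option: str, units: int) -> int:
    """Total project cost = development cost + units * manufacturing cost."""
    dev, per_unit = OPTIONS[option]
    return dev + per_unit * units

def cheapest(units: int) -> str:
    """Which approach minimizes total cost at a given volume."""
    return min(OPTIONS, key=lambda o: total_cost(o, units))

for n in (100, 1_000, 10_000):
    print(f"{n:>6} units -> {cheapest(n)}")
```

With these made-up numbers, the SBC wins at small volumes, the SoM-plus-carrier approach in the middle, and the full-custom design at large volumes, mirroring the reasoning above.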

The benefits of embedded vision systems at a glance:
  • Leaner system design
  • Light weight
  • Cost-effective, because there is no unnecessary hardware
  • Lower manufacturing costs
  • Low energy consumption
  • Small footprint

To know more about machine vision camera dealers in Singapore and Asia, contact MVAsia Infomatrix Pte Ltd at +65 6329-6431 or email us at

Saturday, 2 February 2019


As the plant floor has become more digitally connected, the relationship between robots and machine vision has merged into a single, seamless platform, setting the stage for a new generation of more responsive vision-driven robotic systems. BitFlow, Inc., a global innovator in frame grabbers used in industrial imaging, predicts vision-guided robots will be one of the most disruptive forces in all areas of manufacturing over the next decade.

"Since the 1960s robots have contributed to automation processes, yet they've done so largely blind," said Donal Waide, Director of Sales for BitFlow, Inc. "Vision-equipped robots are different. Now, just like a human worker, robots can see a specific part to validate whether it is being placed correctly in a pick and place application, for example. Cost savings will be realized since less hard fixturing is required and the robot is more flexible in its ability to locate a variety of different parts with the same hardware."


Using a combination of camera, cables, frame grabber and software, a vision system will identify a part, its orientation and its relationship to the robot. Next, this data is fed to the robot and motion begins, such as pick and place, assembly, screw driving or welding tasks. The vision system will also capture information that would be otherwise very difficult to obtain, including small cosmetic details that let the robot know whether or not the part is acceptable. Error-proofing reduces expensive quality issues with products. Self-maintenance is another benefit. In the event that alignment of a tool is off because of damage or wear, vision can compensate by performing machine offset adjustment checks on a periodic basis while the robot is running.
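The hand-off described above, from "the vision system identifies a part and its relationship to the robot" to "this data is fed to the robot", boils down to mapping pixel coordinates into the robot's coordinate frame. A minimal sketch of that mapping, assuming a rigidly mounted camera and a made-up 2D hand-eye calibration (the scale, rotation, and offset values are illustrative, not from any real cell):

```python
import math

# Hypothetical hand-eye calibration for a 2D pick-and-place cell:
# with the camera rigidly mounted, a fixed similarity transform
# (scale, rotation, translation) maps pixels to the robot base frame.
MM_PER_PIXEL = 0.2             # spatial resolution of the camera
CAM_ROT_DEG = 90.0             # camera rotation relative to robot axes
CAM_OFFSET_MM = (300.0, 50.0)  # camera origin in robot coordinates

def pixel_to_robot(px: float, py: float) -> tuple:
    """Map a pixel location reported by the vision system to
    robot base-frame coordinates in millimetres."""
    a = math.radians(CAM_ROT_DEG)
    x_mm, y_mm = px * MM_PER_PIXEL, py * MM_PER_PIXEL
    xr = x_mm * math.cos(a) - y_mm * math.sin(a) + CAM_OFFSET_MM[0]
    yr = x_mm * math.sin(a) + y_mm * math.cos(a) + CAM_OFFSET_MM[1]
    return xr, yr

# A part detected at pixel (640, 480) becomes a robot target position:
print(pixel_to_robot(640, 480))
```

In a real system this transform comes out of a calibration routine rather than hand-written constants, and part orientation would be mapped the same way so the gripper approaches at the correct angle.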


It should come as no surprise that the machine vision and robotics markets are moving in tandem. According to the Association for Advancing Automation (A3), robot sales in North America last year surpassed all previous records. Customers purchased 34,904 total units, representing $1.896 billion in total sales. Meanwhile, total machine vision transactions in North America increased 14.8%, to $2.262 billion. The automotive industry accounts for approximately 50% of total sales.


Innovations in how vision-guided robots perceive and respond to their environments are exactly what manufacturers are looking for as they develop automation systems to improve quality, productivity and cost efficiency. These advancements rely on frame grabbers paired with high-resolution cameras to acquire video streams and convert the data into a form that can be processed by software.

BitFlow has responded to the demands of the robotics industry by introducing frame grabbers based on the CoaXPress (CXP) machine vision standard, currently the fastest and most powerful interface on the market. In robotics applications, the five-to-seven-meter limit of a USB cable connection is often insufficient. BitFlow CXP frame grabbers allow up to 100 meters between the frame grabber and the camera without any loss in quality. To minimize cabling costs and complexity, BitFlow frame grabbers require only a single piece of coax to transmit high-speed data, supply power and send control signals.

BitFlow's latest model, the Aon-CXP frame grabber, is engineered for simplified integration into a robotics system. Although small, the Aon-CXP receives 6.25 Gb/s of data over its single link, almost twice the real-world data rate of the USB3 Vision standard and significantly faster than the latest GigE Vision data rates. The Aon-CXP is designed for use with a new series of single-link CXP cameras that are smaller, less expensive and cooler running than previous models, making them ideal for robotics.
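The bandwidth figures above can be put in practical terms with a rough link-budget calculation. This is a back-of-the-envelope sketch, not vendor data: the 3.2 Gb/s "real-world" USB3 rate and the 20% protocol-overhead factor are assumptions for illustration, and the sensor size is arbitrary:

```python
# Rough link budget: how many 8-bit monochrome frames per second a
# given interface bandwidth can sustain.  Overhead and "real-world"
# rates below are assumptions, not measured or vendor-specified values.
def max_fps(width: int, height: int, link_gbps: float,
            overhead: float = 0.2) -> float:
    bits_per_frame = width * height * 8          # 8 bits per pixel
    usable_bps = link_gbps * 1e9 * (1 - overhead)
    return usable_bps / bits_per_frame

# Single-link CXP-6 vs. an assumed real-world USB3 rate, 2048x1088 sensor:
for name, gbps in [("CXP-6 single link", 6.25), ("USB3 (assumed)", 3.2)]:
    print(f"{name}: {max_fps(2048, 1088, gbps):.0f} fps")
```

Under these assumptions a single CXP-6 link sustains roughly twice the frame rate of the USB3 figure at the same resolution, consistent with the "almost twice" comparison above.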