

AIA and the Association for Advancing Automation are hosting a week-long virtual conference, AIA Vision Week 2020, covering the latest in machine vision and imaging innovations.
Broken into 5 tracks, each day of AIA Vision Week will bring you a new set of sessions geared to help you with your current vision challenges and questions. As part of AIA Vision Week, you will have the opportunity to connect with more than 100 leading vision and imaging companies. Learn about their technology innovations and how they can help your company successfully deploy vision to increase your quality, efficiency and global competitiveness.

About AIA Vision Week

AIA brings you a full week of educational conference sessions, enlightening keynote speakers and connection to the industry’s top suppliers showcasing the latest vision and imaging technologies – all from your convenient computer, tablet or mobile phone screen.

Participation is FREE, and anyone working with vision and imaging technologies – or anyone who would like to – is encouraged to register.

You’ll get access to educational sessions that are taught by leading vision experts where you’ll learn how vision can help you increase profitability, improve throughput, reduce defects, comply with regulations, solve your automation problems and more!

Whether you are seeking entry-level training for a basic understanding of machine vision, imaging, and sensors, or are looking for more advanced solutions, AIA Vision Week has something for you.

Each day of the week, starting Monday, May 18, we’ll have multiple conference sessions, followed by a break, then more topics. The conference will be presented live in Eastern Daylight Time (GMT-4), starting about 10:00am EDT and ending about 3:00pm EDT each day. See the agenda for details.

Be sure to spend time in the Vision Products Showcase, where you can see the latest in vision and imaging technologies and connect with more than 100 leading companies. You can learn about their technology innovations and how they can help your company successfully deploy vision to increase your quality, efficiency and global competitiveness.

AIA Vision Week Study Tracks

The AIA Vision Week agenda is now available in full. There are 5 unique learning tracks that attendees can leverage to learn more about each technology’s unique applications. Keynote speakers will deliver informative talks, and breakout sessions offer detailed looks into more specific areas of machine vision technology. Come with questions and be prepared to learn about the latest technological innovation relating to machine vision, AI, applications, and robotics. Here are the available learning tracks:

About The AIA – The World’s Largest Machine Vision Trade Association

Founded in 1984, the AIA was organized specifically to advance the global understanding and implementation of vision and imaging technologies to help our members grow. We are committed to providing support and leadership on common industry issues.

Today, AIA is the world’s largest global vision and imaging trade group serving over 375 member companies from 32 countries. Our members include manufacturers of vision components and systems, system integrators, distributors, OEMs, end-users, consulting firms, academic institutions and research groups directly involved with vision and imaging.

Key AIA activities include standards development for the industry; market research and analysis (including a quarterly Vision Market Report for members); trade show sponsorship – The Vision Show and Automate; educational workshops, conferences and networking opportunities throughout the world; online and in-person certification training; and Vision Online, the world’s leading resource for vision and imaging information.

We invite you to see AIA’s Vision of the Future video here.

About Encompass Solutions

Encompass Solutions is a business and software consulting firm that specializes in ERP systems, EDI, and Managed Services support for Manufacturers and Distributors. Serving small and medium-sized businesses since 2001, Encompass modernizes operations and automates processes for hundreds of customers across the globe. Whether undertaking full-scale implementation, integration, or renovation of existing systems, Encompass provides a specialized approach to every client’s needs. By identifying customer requirements and addressing them with the right solutions, we ensure our clients are equipped to match the pace of Industry.


Automation is hitting the food and beverage packaging industry hard, with a recent report from the Association for Packaging and Processing Technologies revealing that half of the companies it surveyed plan to substantially increase plant automation over the next three to five years. That said, the food and beverage manufacturing professionals surveyed admitted that while they knew automation had to be adopted, they didn’t know where to begin when it came to implementing robotics, automation, or ERP software.



Advances in agriculture, food, and beverage processing automation and robotics technologies have been drastically changing how these industries operate. The benefits these technologies provide are too numerous and too significant to ignore. However, the trend is at odds with the very nature of these industries, which are among the most highly controlled and restricted in the world. This is one of the main reasons these manufacturers have only now been able to build momentum in modernizing operations.

Jonathan Wilkins, marketing director at EU Automation, attributes the shift to decreased implementation costs as well as the improved performance of more refined technologies in the sector. These manufacturers are ready to adopt new technology now that it has been proven to offset the rising costs of raw materials and the energy required in production.

Food And Beverage Manufacturing Advice From The Experts

Nigel Smith, CEO of industrial robotics specialist firm TM Robotics, has some choice advice for those in food and beverage manufacturing looking to adapt their operations alongside the rapidly changing manufacturing landscape. By incorporating the latest in robotics and advanced automation technology these enterprises can improve performance and overall value. Smith highlights a few key factors that will boost potential ROI and minimize the headaches associated with integrating new systems into existing infrastructure.

Plan With Purpose

No two systems are the same. Whether canning kombucha or creating ready-made meals on an assembly line, assessing the specific needs of a food and beverage manufacturing operation prior to implementation is essential. The right robots, and the associated hardware to complement them, will make or break your automation efforts. The software powering the backend is an often-overlooked element to consider as well. Operating systems for your robotic workforce, trained human counterparts, and ERP software all come together to create a manufacturing floor that works with precision, drives productivity, and optimizes operations. Once the creation of goods is complete, what about distribution and order fulfillment? Additional components of the sales cycle come into play long after your goods are packaged and ready to go. The right technology to compete in a modern distribution environment is essential as well.



When it comes to installation, designers and consultants need to think beyond the process, as their manufacturer clients do, to understand the importance of consistency, compliance, and presentation. Partnering with the right system integrators, ERP consultants, and managed services providers can make or break manufacturers’ modernization efforts.

Personalize Performance

Once you’ve completed the broad strokes of selection, you’ll need to zero-in on the supplementary components of your plan to really drive ROI.


Let’s say you’ve got the SCARA machines to pick and place ingredients for each batch in your process, but do you know which grippers to use to ensure minimal damage to those ingredients? What about the right vision system to properly identify those ingredients or potential defects? Calibration is another key component: it must be precise when handling products to keep waste to a minimum. These details are exactly why food and beverage manufacturers looking to modernize should work with experienced and reputable integrators and consultants to bring operations up to competitive pace.

About Encompass Solutions

Encompass Solutions, Inc. is an ERP consulting firm and Epicor Gold Partner that offers professional services in business consulting, project management, and software implementation. Whether undertaking full-scale implementation, integration, and renovation of existing systems or addressing the emerging challenges in corporate and operational growth, Encompass provides a specialized approach to every client’s needs. As experts in identifying customer requirements and addressing them with the right solutions, we ensure our clients are equipped to match the pace of Industry.


With discrete and batch process manufacturers already taking full advantage of advancements in robotics and machine vision technology to improve operations, winemakers and agricultural industries are increasingly leveraging machine vision AI and robotics to improve processes in the vineyard. Coupled with modern ERP systems, these manufacturers are better equipped to weather disruption and establish their foothold in a competitive marketplace.

Manufacturing, packaging, and production are already quite familiar with automation, vision systems, and collaborative robotics. Now, the more traditionally manual and complex areas of agriculture are beginning to feel the influence of these powerful technologies.

AI, Robotics And Automation In Agriculture

For example, researchers at the Agriculture and Biological Engineering Group at the University of Illinois at Urbana-Champaign are making big pushes to proliferate robotics and automation into agriculture, forestry, and fisheries. The group has outlined its vision for the influx of robots, machine vision, and AI in three levels. Initially, robots will use machine vision technology to survey and collect data that provide insight into a variety of environmental factors. A follow-up wave of more specialized robots will prepare and maintain sites, performing field operations such as weeding, picking, and pruning. Once a suitable location has been established and the land prepared for operations, third-generation robots and autonomous systems will emerge to automate the complete process from seeding to packing. The vision may still be years from coming to fruition, but examples of second-generation robots are already in the works. Take the California wine industry, for example, where robots and irrigation technology work in tandem to make more efficient use of watering practices in the drought-stricken state.

A Robot Workforce Uprooting Global Wine Industries

The vineyard might be one of the last places people imagine the latest advancements in technology are being utilized. However, the applications of machine vision, AI and robots are disrupting the winemaking industry to such an extent that the benefits of their incorporation are too great to ignore.

Take California’s wine country, for example. In a state with one of the most sizable wine industries in the world, yet one plagued by drought, innovators have answered the call with Robot-Assisted Precision Irrigation Delivery (RAPID). RAPID uses precision monitoring technology to deliver water through specialized emitters attached to irrigation lines laid throughout a vineyard.

The project was funded with a $1 million grant from the Department of Agriculture and headed by UC Merced professor Stefano Carpin. The unmanned ground vehicle, equipped with GPS, can map routes throughout vineyards. Relying on drone and satellite images, the vehicle has a continuous, real-time view of weather conditions. Additionally, the robot uses a “grasping hand” to turn the water emitters in a way that increases or decreases the flow of water. This improves considerably on current irrigation setups, which deliver a constant flow of water across the entire system. With more efficient use of water in the drought-stricken region, RAPID can improve vineyard yield, reduce waste, and even customize the watering process depending on a vineyard’s variety of grape. Carpin hopes to have a fully functional test system available by 2020.
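A control rule of the kind RAPID’s grasping hand would enact can be sketched in a few lines. This is purely illustrative: the moisture target, band, and flow step below are invented for the example, not taken from the project.

```python
# Illustrative emitter-adjustment rule for precision irrigation.
# Thresholds and step sizes are hypothetical, not from RAPID itself.

def adjust_emitter(flow_lph, soil_moisture, target=0.30, band=0.05):
    """Nudge an emitter's flow (liters/hour) toward a target soil moisture."""
    if soil_moisture < target - band:
        return flow_lph + 1.0              # too dry: open the emitter
    if soil_moisture > target + band:
        return max(0.0, flow_lph - 1.0)    # too wet: close it down
    return flow_lph                        # within band: leave it alone

print(adjust_emitter(4.0, 0.20))  # 5.0 -> a dry vine gets more water
print(adjust_emitter(4.0, 0.40))  # 3.0 -> a saturated vine gets less
```

A real system would fold in the drone and satellite data mentioned above, but the decision at each emitter reduces to a comparison like this one.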

Pruning With Precision Machine Vision Systems

Wall-Ye V.I.N. is France’s answer to one of the most labor-intensive components of running a vineyard: vine maintenance, or, to be more precise, pruning, de-suckering, and clipping fruitless shoots. Understanding which vines need to be pruned, and to what extent, is considered a sacred charge in many winemaking circles. Nevertheless, tremendous advancements are being made to automate these laborious and time-consuming tasks. The creation of Burgundy-based inventor Christophe Millot, Wall-Ye V.I.N. has even the most scrupulous winemakers nodding in approval, as the economic value the robot presents is undeniable. Take, for instance, the human component of pruning. It takes somewhere near three years to fully train a pruner to man the vines, whereas Wall-Ye V.I.N. promises to be ready to prune in a fraction of that time. Still, the robot will not be capable of taking on all the tasks its human counterparts are responsible for, and will instead take on a collaborative role within the industry.

Meanwhile, across the globe, California-based Vision Robotics is hard at work creating a system considerably larger than Wall-Ye V.I.N. to tackle pruning tasks, and New Zealand’s University of Canterbury is developing a similar pruning robot as well. All three projects share a common approach: an imaging system feeds into an AI that builds 3D models to determine which vines make the cut.

Robotics Helping Harvest

Once the cluster-heavy vines are ready for harvest, machine vision, AI and robotics come into play in a big way. Identifying which grapes are going to produce the best wine is an arduous task made easy with the incorporation of modern technology. Take for instance the robots in place at the Hall Vineyards in Napa Valley, California. Once grape clusters have been harvested, the robots on location are fed the clusters to identify which make the cut and which do not. Taking more than 10,000 photos a second, the robot is capable of conducting an analysis of each photograph virtually instantly.

The operator inputs what parameters the robot uses to identify what is acceptable and what is not. All acceptable specimens proceed to a “good fruit” bin at the end of a conveyor, while the rejected fruit is blasted off the line with a precision burst of air. This is just one way that robots improve the efficiency of vineyard operations and enable winemakers to create even more satisfying beverages.
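The operator-configured accept/reject step described above amounts to checking each measured feature against an acceptance range. The sketch below is a hypothetical illustration; the feature names and thresholds are invented, not drawn from the Hall Vineyards system.

```python
# Hypothetical pass/fail sorter: operator-defined ranges per feature.

def classify_cluster(features, params):
    """Return True ("good fruit") only if every measured feature
    falls inside its operator-defined acceptance range."""
    for name, (low, high) in params.items():
        value = features.get(name)
        if value is None or not (low <= value <= high):
            return False   # rejected: blasted off the line with air
    return True

# Operator inputs: the acceptable range for each measured feature
params = {
    "ripeness": (0.6, 1.0),       # e.g. a color-derived ripeness score
    "diameter_mm": (12.0, 22.0),
}

print(classify_cluster({"ripeness": 0.8, "diameter_mm": 15.0}, params))  # True
print(classify_cluster({"ripeness": 0.3, "diameter_mm": 15.0}, params))  # False
```

The speed comes from running a check like this against features extracted from each of the thousands of photographs captured per second.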

Another example emerges in the EU project VineRobot, which uses color cameras, infrared thermography, and GPS techniques to obtain agronomical and physiological data from the vineyard in real-time. By combining all the necessary data, VineRobot can alert operators and other staff to nutrient deficiencies in plants and, based on pigmentation, to fruit that is ready for harvest. Throughout the process, VineRobot creates a complete representation of the vineyard’s crop quality from the data gleaned by its sophisticated vision sensors and systems.



‘Daisy’, the disassembling robot, is Apple’s answer to reclaiming the valuable materials that go into the creation of nearly every Apple mobile and tablet device. Aluminum, cobalt, gold, silver, platinum, and many other valuable metals and rare earths can be extracted using this useful technology. Daisy’s efficiency is punctuated by an exceptional rate of recovery, disassembling and collecting materials from outdated devices at the rate of 200 iPhones per hour.

Not only has the recycling robot enabled Apple to recover these valuable resources, but the machine’s creation process has yielded valuable information on recycling many of these materials in a cleaner and more efficient manner. New processes have eliminated the need to introduce contaminants and other dangerous substances into the recycling process, leading to an unsullied recycling process all around.

Daisy is not the first iteration of a mechanical recycler, though. Liam was its predecessor, announced by Apple back in 2016. A very specialized robot, Liam was designed specifically to disassemble iPhones to access the recyclable materials inside. Some of the crucial components Liam sought, and Daisy now seeks, in each iPhone carcass include cobalt and lithium from the phone’s battery, gold and copper from the camera, silver and platinum from the device’s logic board, and aluminum from the enclosure.

Additional Charity With Apple’s GiveBack

Alongside the Daisy announcement, which coincided with Earth Day, Apple announced a commitment to match customer device turn-ins with charitable contributions to Conservation International, an environmental non-profit based in Virginia, through April 30th. Some devices being turned in will even nab in-store gift cards and credit for those donating.

The press release detailing Daisy and Earth Day campaigns from Apple, along with all media associated with the announcement, can be found in the Apple Newsroom.

Cleaner Streams Of Recycling With Material Recovery Facilities

Daisy’s announcement is just one of the many emerging advancements taking place in the world of Material Recovery Facilities (MRFs). The recent WasteExpo 2018 in Las Vegas highlighted many of the recent advancements in the world of cleaner recycling and material recovery in electronics recycling, cleaning and sorting equipment, and municipal recycling endeavors.

Robotics and artificial intelligence, in particular, are assuming significantly larger roles in the advancement of recycling efforts, enabling greater efficiencies through:

  • Heavy lifting
  • No deviation due to fatigue
  • Repetitive tasks
  • Continuously high levels of concentration
  • Purity rates and consistent and accurate identification of products
  • Pre-emptively tracking and managing work
  • Maximal operating time
  • Evolving identification of products and more meaningful data
  • Reproducibility of results
  • Reduced labor and training
  • Lower operating costs

While these innovations in the field of material recovery have enabled companies like CleanRobotics and AMP Robotics to function with greater efficiency, difficulties remain in the variety of materials flowing into the recycling stream. The resounding answer to the challenge emerges again and again: machine vision.

Material Recovery Pushes Advances In Machine Vision Systems

Coupled with robots on recycling conveyor systems, machine vision systems identify elements and materials according to a number of characteristics. Once identified, the robotic component will employ suction, grippers, and grabbers to remove materials from the conveyor and sort them accordingly, for either direct recycling or further disassembly, if necessary. Eagle Vision and Bulk Handling Systems are two entities addressing the need for more robust machine vision systems in MRFs.


Interconnectivity between MRF system components – read: the Industrial Internet of Things (IIoT) – allows devices to “speak” with one another across the facility. The results can be as simple as a report of “I’m getting too much plastic,” in response to which screens can be adjusted to narrow or expand the flow of specific materials, according to Nathanaël Lortie, co-founder and president of Eagle Vision.
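That kind of exchange can be pictured as a trivial control loop: one component reports its material flow, and a controller nudges a screen setting in response. The message strings and the 10 percent adjustment below are invented for illustration, not taken from any real MRF protocol.

```python
# Toy sketch of an IIoT-style exchange between MRF components.
# The message vocabulary and adjustment factors are invented.

def adjust_screen(opening_mm, message):
    """Narrow or widen a sorting screen based on a flow report."""
    if message == "too much plastic":
        return opening_mm * 0.9    # narrow the screen to cut the flow
    if message == "too little plastic":
        return opening_mm * 1.1    # widen it to let more through
    return opening_mm              # unrecognized report: change nothing

opening = 50.0
opening = adjust_screen(opening, "too much plastic")
print(opening)  # 45.0
```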

With accuracy rates reaching upwards of 85 to 95 percent, these robots and their associated systems far surpass the public’s shoddy-by-comparison 30 percent accuracy.

About Encompass Solutions

Encompass Solutions, Inc. is an ERP consulting firm that offers professional services in business consulting, project management, and software implementation. Whether undertaking full-scale implementation, integration, and renovation of existing systems or addressing the emerging challenges in corporate and operational growth, Encompass provides a specialized approach to every client’s needs. As experts in identifying customer requirements and addressing them with the right solutions, we ensure our clients are equipped to match the pace of Industry.


In the wake of its recent autonomous vehicle collision, Uber Technologies Inc. has decided to halt field testing of autonomous vehicles in cities including Pittsburgh, San Francisco, Toronto, and Phoenix after one of its vehicles struck and killed a pedestrian in Tempe, Arizona. It has been reported that the pedestrian stepped in front of the autonomous vehicle suddenly, a detail that will likely take focus as authorities continue investigating the incident, which occurred Sunday night.

What is known at this point is that the woman had crossed the street outside of a crosswalk when she was struck. At the time of the collision, a human safety driver was supervising from inside the cabin of the vehicle; the driver said the incident occurred “like a flash” and that the first indication of the collision was the sound of the impact itself. Some experts following the industry closely expressed significant alarm when it was revealed that no braking or swerving maneuvers were enacted to avoid the collision. The incident took place around 10 pm local time, at which point the pedestrian was taken by ambulance to a local hospital. She later succumbed to her injuries. According to local authorities, Uber is cooperating fully with the Tempe Police Department during the investigation.

Uber Autonomous Vehicle Collision Industry Effects

The incident has far-reaching implications for the highly-scrutinized societal integration of autonomous vehicles, which incorporate machine vision and AI in hopes of revolutionizing the auto industry, how we travel, and how urban population centers function. Already, tens of billions of dollars and years of research have been invested in the technology by companies like Alphabet Inc., General Motors Co., and Baidu Inc.

Despite what the news would have you believe in the wake of incidents like the Uber autonomous vehicle collision, relatively great strides have been made in autonomous vehicles over the last five years. This is due largely to a rather relaxed regulatory system surrounding the technology; the Department of Transportation removed significant hurdles to autonomous vehicle testing just last year. Many experts fear the recent event will have a significant impact on the regulatory environment for autonomous vehicles moving forward. In contrast to the DoT’s actions, the National Transportation Safety Board has begun its own investigation into the matter and sent a small team of investigators to Tempe for a closer look, in addition to advocating for stricter policies on autonomous vehicles.

Machine Vision Is The Foundation Of Autonomous Vehicles

Despite all the attention autonomous vehicles have enjoyed in recent years, the concept is far from new, and the vision technology that powers it has been in development since the 1960s. SRI’s Shakey was the first mobile robot to use a computer vision system for navigating terrain and obstacles. The JPL robot and the Stanford Cart followed as the technology became more refined, though none could compare with the level of machine vision and AI systems utilized by autonomous vehicles today.

More modern vision systems would emerge, such as the Ralph vision system, which helped automobiles navigate using sampled images to assess road curvature and determine a vehicle’s position relative to the roadway’s center. Today, LIDAR, a light-based analog of radar, is the system of choice. Using a series of “eyes” embedded within the autonomous vehicle’s body, its sensors send out pulses of invisible laser light and record how long each pulse takes to reflect off the surface of an object and return. Camera-based methods are also used in autonomous vehicle vision systems. These help create 2D and 3D images of objects, much as a human eye would: multiple cameras positioned around the vehicle provide a full field of vision that mimics that of a real driver, and specialized software then “learns” human behavior and models the objects adjacent to and within the roadway.
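The round-trip timing at the heart of LIDAR reduces to simple arithmetic: the distance to a surface is half the pulse’s round-trip time multiplied by the speed of light. A minimal sketch:

```python
# Back-of-the-envelope LIDAR ranging from a pulse's round-trip time.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_echo(round_trip_s):
    """One-way distance = (round-trip time x c) / 2."""
    return C * round_trip_s / 2.0

# A pulse that returns after ~200 nanoseconds hit something ~30 m away
print(round(range_from_echo(200e-9), 2))  # 29.98
```

Real units fire many such pulses per rotation to build a point cloud, but each individual range measurement is exactly this calculation.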

A third vision approach to autonomous vehicle systems emerged in late 2017 from Israeli start-up AdaSky. The AdaSky system utilizes Far Infrared (FIR) perception that the company has coined as Viper.

The technology is not new, but AdaSky CEO Avi Katz claims they are the first to apply the technology to autonomous vehicle vision systems in hopes of avoiding incidents like the Uber autonomous vehicle collision. The system is built to complement peripheral vision systems and sensing technology like LIDAR, radar, and cameras. According to Katz, the increased capacity for classification, identification, and detection of objects enables autonomous vehicles to better understand and interact with their surroundings, making them much safer as a result.

Machine Vision Beyond The Roadway

Machine vision systems have become incredibly advanced both on the road and on the assembly line, where powerful software and components identify objects and enable machines to interact more elaborately with their environments. The machine vision systems in use today largely govern image-based inspection, process control, and robot guidance. While autonomous vehicles and futuristic technologies garner the lion’s share of attention from the media and the general population, machine vision already has an incredible impact on our daily lives: the parts in our cars, the appearance and packaging of our food, the quality of our medical devices, and even lab specimens all pass through some sort of machine vision system before they reach their destination.



Picking and sorting of objects is an activity few humans look forward to with great elation. However, the tedious nature of the task is a prime candidate for automation using robotics. Beyond the obvious hardware that it takes for a robot to operate within the confines of a designated task, a less obvious one, machine vision, acts as a critical component of efficient robotic sorting. The technologies involved with machine vision, sensing, and object interaction are already being used by robots with great success on the International Space Station in completing even complex tasks, semi-autonomously.

Robotics labs around the world are hard at work refining the technology for applications in factories, warehouses, and even relief efforts in disaster areas. The environment in each example is likely one with an abundance of clutter as well as rife with objects of varying size, weight, and orientation. This is a perfect setting to test and apply the advancements of machine vision and object interaction.


Machine vision is increasingly relied upon in automation to drive quality and ROI. The market is expected to reach $18.7 billion by 2022.

Components of Machine Vision

Machine vision has significant capabilities on factory floors and production lines. As systems acquire product images and extract the relevant information, the information is analyzed and communicated to the outside world. A lot goes into the technology behind machine vision, which can be broken down into five essential components: lighting, lenses, vision processing, image sensing, and communications.

Machine Vision Lighting

Lighting is essential to the success of machine vision results. By capturing images through analysis of reflected light, machine vision systems can effectively identify objects as well as their orientation in an environment. Several lighting techniques can be utilized by machine vision systems, including backlighting to take external and edge measurements, structured lighting patterns to interpret angles on an object’s surface, and strobe lighting to freeze moving objects for examination or to counter blurring. These are only a few examples of the lighting techniques utilized in machine vision systems, which can also incorporate diffuse dome, bright-field, dark-field, and axial diffuse lighting. A more comprehensive guide to machine vision lighting can be found in this whitepaper from National Instruments.

Machine Vision Lenses

Just as in conventional cameras, lenses capture an image and deliver it to sensors within the camera. One can also think of this in terms of our eyes delivering the images we see to our brains for interpretation. Fixed and interchangeable lenses are the most common types in machine vision systems, with lenses of varying sizes and shapes used to capture the most precise image for the system’s intended use. Fixed lenses are typically standalone components and can autofocus via mechanical adjustment or via a fluid lens that adjusts automatically to deliver the highest image quality; they have a fixed field of view from a given distance. Interchangeable lenses, on the other hand, are typically equipped with C-mounts or CS-mounts that allow them to be removed or attached at will. Vision Systems Design does an excellent job detailing the fundamentals of machine vision lenses in this article.

Machine Vision Image Sensors

An essential component of image capture, image sensors use a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor to convert light into electrical signals. Put more simply, an image sensor captures the light reflected from an object and renders it as a digital image with enough detail for processing software to make accurate measurements. A more comprehensive article on image sensors from Coventor can be found here.
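
The photon-to-pixel chain can be sketched as a simple model of a single pixel: light generates electrons, and an analog-to-digital converter turns the charge into a digital number. The quantum-efficiency, gain, and bit-depth figures below are illustrative values, not specifications of any particular sensor:

```python
def photons_to_digital_number(photons, quantum_efficiency=0.6, gain=0.1, bit_depth=12):
    """Model one pixel of a CCD/CMOS sensor: photons -> electrons -> digital number.

    quantum_efficiency -- fraction of incoming photons converted to electrons
    gain               -- digital numbers (DN) produced per electron
    bit_depth          -- ADC resolution; output saturates at 2**bit_depth - 1
    """
    electrons = photons * quantum_efficiency        # photoelectric conversion
    dn = round(electrons * gain)                    # ADC quantization
    return max(0, min(dn, 2 ** bit_depth - 1))      # clip to the sensor's dynamic range
```

With these toy numbers, 1,000 photons yield a pixel value of 60, and a very bright source saturates at the 12-bit ceiling of 4095, which is why lighting and exposure must be matched to the sensor.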

Vision Processing Units

A Vision Processing Unit (VPU) is the component of machine vision that extracts information from the digital images captured by the cameras. The processing undertaken by these microprocessors can be completed externally or internally on a standalone system, and it proceeds in several steps: an image is acquired from the sensor, software identifies specific features of the image, measurements are taken and compared against targets, and a decision is reached based on the result. The result is then communicated to the system so it can take further action. While the physical components of machine vision are integral to the overall function of these systems, the processing algorithms that evaluate and compare results are the most influential. Processing software is responsible for configuring camera parameters, pass-fail detection, communicating information to factory floors, and supporting Human Machine Interface (HMI) development.
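
The steps above can be sketched as a minimal pass/fail inspection loop. The "image" here is a toy grayscale array and the bright-pixel spec is an invented example, not a real inspection criterion:

```python
def inspect(image, min_bright_pixels=4, threshold=128):
    """Toy pass/fail check: count pixels above a brightness threshold
    and compare that measurement against the spec."""
    # Step 1: "acquire" -- the image arrives as rows of grayscale values (0-255)
    # Step 2: extract a feature -- here, the count of bright pixels
    bright = sum(1 for row in image for px in row if px >= threshold)
    # Step 3: compare against the spec and reach a decision
    passed = bright >= min_bright_pixels
    # Step 4: communicate the result for the system to act on
    return {"bright_pixels": bright, "pass": passed}

part = [
    [10, 200, 210, 15],
    [12, 220, 205, 11],
]
result = inspect(part)   # four bright pixels meet the spec, so the part passes
```

Real processing software runs far richer algorithms (edge detection, pattern matching, gauging), but the acquire-measure-compare-decide-communicate structure is the same.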

Machine Vision System Communications

As one might conclude from this brief overview, machine vision systems are an amalgamation of parts and components that must all work in unison to deliver accurate results consistently and in real time. Because environments can change dynamically and at a moment’s notice, the system must also be able to realign its vision on the fly. Communications are therefore an essential component of machine vision systems and are typically executed through discrete I/O signals or through data delivered over a serial connection to a connected device, e.g., Ethernet or RS-232. Programmable logic controllers (PLCs) are the most common connection to discrete I/O points and can control stack lights, solenoids, or other indicators to trigger reject responses from the system.
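
On the serial side, the handoff can be as simple as framing the inspection result as an ASCII message for the downstream device. The STX/ETX framing and XOR checksum below are a generic illustrative pattern, not the wire protocol of any particular PLC or vision system:

```python
STX, ETX = b"\x02", b"\x03"   # classic start/end-of-text control bytes

def frame_result(part_id: int, passed: bool) -> bytes:
    """Frame an inspection result for an RS-232 or Ethernet (TCP) link."""
    payload = f"PART={part_id};RESULT={'PASS' if passed else 'FAIL'}".encode("ascii")
    checksum = 0
    for b in payload:
        checksum ^= b                     # simple XOR checksum over the payload
    return STX + payload + f";CS={checksum:02X}".encode("ascii") + ETX
```

The receiving controller strips the framing bytes, verifies the checksum, and energizes the appropriate output, such as a reject solenoid on a FAIL.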

Machine Vision Applications In Production Processes

Machine vision is already hard at work aboard the ISS, but plenty of more down-to-earth applications exist right here on Earth. Industrial inspection is among the largest industries utilizing machine vision, and applications such as medical imaging, remote sensing, and autonomous vehicles use machine vision on a continuous basis as well.

Sorting

Machine vision applications in warehouse and factory conditions are exceptional at mitigating the amount of human error that can affect repetitive processes, such as bin-picking and sorting tasks. The technology allows for robots to make sense of a cluttered workspace or full bin and extract the relevant objects appropriately.

Assembly Verification

When quality is of the utmost importance, consistent outputs on the assembly line are necessary to a company’s bottom line. Inspection operations in assembly verification are completed in milliseconds to ensure that every item is up to spec and incomplete products don’t make it past the check.

Automation

Automating production is an essential function of manufacturing operations. Machine vision can assist in detecting system abnormalities, jams, and other hiccups that can affect the production process. Improving the consistency of operations ensures little interruption and a reduction in production costs that result from downtime.

Removing Defects

Automation demands two things: consistency and simplicity. When things get complicated they tend to also get expensive. Machine vision systems are capable of inspecting hundreds of items per minute to a high degree of accuracy, thus ensuring that defective items are removed from the equation before they can affect a business’ bottom line, without the need for complex systems of checks throughout the production process.

Identification

The ability to scan barcodes and other identifying marks under difficult conditions, whether of lighting, texture, or packaging, is essential to keeping operations running smoothly. Machine vision systems help achieve optimum efficiency by quickly reading necessary labeling information on the production line and in distribution centers.

About Encompass Solutions

Encompass Solutions is a business and software consulting firm that specializes in ERP systems, EDI, and Managed Services support for Manufacturers and Distributors. Serving small and medium-sized businesses since 2001, Encompass modernizes operations and automates processes for hundreds of customers across the globe. Whether undertaking a full-scale implementation or the integration and renovation of existing systems, Encompass provides a specialized approach to every client’s needs. By identifying customer requirements and addressing them with the right solutions, we ensure our clients are equipped to match the pace of industry.


Space Robots

With the successful launch of SpaceX’s Falcon Heavy rocket, now is an excellent opportunity to talk about space robots, machine vision, and their roles in expanding space research and exploration.

 


A Stunning View Of Earth Captured Following The Falcon Heavy Launch. Photograph: SpaceX.com Live Feed.

 

Space robots. Modeled after us in morphology and size, they are superior to industrial robots in versatility and capability. While right now they may not look as advanced or operate as nimbly as their representations in sci-fi features from the Star Wars and Star Trek franchises, that gap is quickly shrinking. Taking on repairs and other tasks deemed too dangerous for astronauts, these specialized robots are the obvious candidates for many of the precarious activities taking place beyond the relative comfort of Earth.

Space Robots: R2 Goes To The International Space Station

The first humanoid robot in space, Robonaut 2 (R2 for short), was developed by the Dexterous Robotics Laboratory at Johnson Space Center (JSC) in Houston, Texas. R2 emerged earlier this decade as the latest subject of robotics research in space. Originally consisting of only an upper torso and arms, R2 has since been equipped with two climbing manipulators (legs, in effect) capable of providing mobility in zero-g environments, complementing the dexterous arms and digits that handle intricate tasks. R2’s evolution is a marvel for researchers and enthusiasts to behold, but what is more impressive than the improvements over its predecessor, R1, are the advanced sensing capabilities that allow R2 to truly perform in one of the most challenging environments imaginable.


Space Robots: Robonaut 2 Working Tirelessly Aboard The International Space Station.

Machine Vision, Sensing, And Perception

The abilities to touch and see are perhaps the most extraordinary of these robots’ capabilities. Vision and sensing components relay complex sets of data, such as the identity, position, and orientation of objects in an image. Powerful industrial machine vision and process guidance systems are giving next-generation robots the ability to evaluate and react effectively in real time.

Without machine vision, robots are little more than extensions of their controllers and the setpoints governing automated tasks. In R2’s case, 3D vision is the component that allows it to perform complex tasks in a semi-autonomous fashion. R2 is capable both of remote control by operators and of semi-autonomous operation using advanced software that lets it “think” through the solution to a given task, and software updates regularly expand the depth and breadth of its capability. R2’s vision is governed by five cameras in all: two provide stereo vision for the robot and its operators, and two auxiliary cameras serve as backups. Stereo vision compares images from two vantage points, effectively allowing R2, and us, to see in 3D. A fifth, infrared camera is contained within the mouth area to aid in depth perception. All vision components are housed within the cranium, while R2’s “brain” is located within the robot’s torso. R2 can look up and down and left and right to fully gauge its surroundings.
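
The stereo principle reduces to simple geometry: a feature seen from two vantage points shifts horizontally between the images (its disparity), and depth equals focal length times camera baseline divided by that disparity. A minimal sketch follows; the focal length and baseline are made-up illustrative values, not R2’s actual camera parameters:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth from a rectified stereo camera pair.

    focal_px     -- focal length, in pixels
    baseline_m   -- distance between the two cameras, in meters
    disparity_px -- horizontal shift of the feature between images, in pixels
    """
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px   # Z = f * B / d

# A feature 14 px apart in the two views, cameras 7 cm apart:
z = depth_from_disparity(focal_px=700, baseline_m=0.07, disparity_px=14)  # 3.5 m
```

Nearer objects produce larger disparities, which is why stereo depth is most accurate up close and degrades with distance.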

 


Space Robots: R2 Equipped With Two Climbing Manipulators (Legs, In Effect) Capable Of Providing Mobility In Zero-G Environments.

 

A prime example of cooperative robotics at work, R2 interacts with the astronauts on the ISS much the way another person might. Operating at a pace comparable to a human’s, R2 has a soft, padded skin equipped with sensing systems that let it react when it encounters a person. Force control is provided by torsion springs inside the robot, which allow R2 to yield to influence from the environment: when a person pushes an arm away, R2 gives to the motion and lets the person by. This sensing capability also provides R2 with continuous awareness of its orientation and of the location of its limbs relative to the environment and surrounding people.

Object Interaction

As for Robonaut’s interaction with its environment, its hands work a bit differently from both humans’ and industrial robots’. The key difference resides in R2’s tendon-driven robotic fingers. Typically, such robots control their joints via tension controllers located on each tendon individually; put another way, a desired joint torque translates into tendon tension. This posed a problem for R2: the coupling between joint displacement and tendon tension had to be addressed before R2 could interact with unfamiliar objects in an array of positions. This is in stark contrast to R2’s industrial cousins, which operate in uniform spaces with familiar objects. The solution came when NASA and GM engineers devised a joint-based torque control method that decouples the tendons. All this talk of torque is of particular importance for R2, as for many other humanoid robots, because interacting with a variety of objects large and small demands an adaptable grip.
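
The tension-to-torque relationship itself is linear: each tendon contributes torque equal to its tension times its moment arm about the joint. A sketch for a single joint driven by an antagonistic tendon pair follows; the moment arms and tensions are illustrative numbers, not R2’s actual parameters:

```python
def joint_torque(moment_arms, tensions):
    """Net torque on one joint from its tendons: tau = sum(r_i * f_i).

    moment_arms -- signed moment arm of each tendon about the joint (m);
                   a flexor/extensor pair has opposite signs
    tensions    -- tendon tensions (N), always non-negative (tendons only pull)
    """
    if any(f < 0 for f in tensions):
        raise ValueError("a tendon cannot push")
    return sum(r * f for r, f in zip(moment_arms, tensions))

# Flexor routed at +1 cm, extensor at -1 cm; 50 N vs 30 N of tension
# yields a net flexion torque of 0.2 N*m:
tau = joint_torque([0.01, -0.01], [50.0, 30.0])
```

Note that equal tensions on both tendons produce zero net torque while still stiffening the joint, which is one reason grip stiffness and grip force can be tuned independently in tendon-driven hands.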


Space Robots: Robonaut 2 Is Capable Of Working With An Array Of Tools. Photographer: Robert Markowitz.

What’s Next For The ISS And Non-Human Crewmembers

The most recent iteration of Robonaut to come from Houston’s JSC is R5, or Valkyrie. Built to compete in the 2013 DARPA Robotics Challenge (DRC) Trials, Valkyrie was designed over a 15-month period and improved on the electronics, actuators, and sensing capabilities of earlier generations of JSC humanoid robots. In particular, R5’s vision and sensing improvements are a tremendous advancement over those found in R2. Valkyrie’s redesigned head sits on a neck with three degrees of freedom, and its main perceptual sensor is a Carnegie Robotics MultiSense SL: a tri-modal (laser, 3D stereo, and video), high-resolution, high-data-rate, high-accuracy 3D range sensor. Additional modifications include infrared structured-light point-cloud generation beyond the laser and passive stereo methods already implemented, as well as front and rear “hazard cameras” positioned in the torso.


Space Robots: The Latest Iteration Of Robonaut, Robonaut 5, Is Also Referred To As Valkyrie And Features The Latest Tech In Robotics For Space Applications.

As research advances technology here on the ground, components and software can be sent to the ISS for utilization. Once proven to operate effectively on the ISS, NASA and other robotics laboratories hope that innovative robotics and associated technologies can be applied further in the depths of space. In the future, thermal resistance for robots will likely be a main focal point for researchers.
