Robotic VISION Technologies applies for ten patents covering innovative software for Vision Guided Robotics
BLOOMFIELD HILLS, Mich. (Nov. 17, 2015)—Robotic VISION Technologies Inc. (RVT) has applied for ten new patent families in the vision guided robotic automation software field with the World Intellectual Property Organization and the U.S. Patent & Trademark Office.
“These are 21st century inventions that will make major contributions to the ‘Robotic Revolution’ and stimulate automated manufacturing industries worldwide,” RVT Founder and Chief Executive Officer Rick Weidinger said. “Our technology team has been working on these innovations for the past several years with the assistance of our IP partner, the Invention Development Fund. These exciting new technologies have valuable applications in vision guided robotics and machine vision.”
RVT’s patent portfolio includes six issued patents in addition to the ten newly pending patent families. RVT was the first to ever be granted a single-camera 3D patent. The company has two issued patents for single-camera 3D, three for Random Bin Picking™ and one patent for Structured Light 3D vision. RVT is a recognized leader in the field of vision with its trademarked eVisionFactory™ (eVF™) vision guidance software platform.
“Adding these ten new patents will both bolster our existing portfolio and introduce new vision products into the market, which will keep us at the front of vision guided robotics technology,” Weidinger said.
Below are summaries of the ten new patent families.
1. Auto-detection and feature finding: Automatic feature detection and tracking of moving parts allows RVT’s software to learn the shape and features of a part and then use that knowledge to handle the part automatically, even in motion, allowing a robot to perform its duties on a new or modified part even when that part moves at a constant or variable rate.
2. Sensor-based safety feature for robotic equipment: Light-based sensors such as cameras are used in robotic automation systems where humans may occasionally or even regularly be present. Using a model of the robot’s environment generated from these sensors, a monitoring process detects whether an object or person might be present that would be endangered by the robot’s actions. This collaboration technology automatically adjusts the robot’s movement depending on the detection and location of the object or person. The monitoring can be piggybacked on an existing vision system for the robot, or employed as a second safety system to protect both robot and human worker from dangerous contact in the workplace.
3. Three-dimensional imaging sensor calibration: Some of the most challenging 3D machine vision applications require careful calibration of a 2D imaging system with a reference optical signal that provides depth information, such as a laser line projector or range-finder. This depth information must be combined with other information from the imaging sensor to correlate it with the 2D imagery the sensor gathers. The technology also provides novel laser patterns that enhance the way depth information is calibrated to the 2D image and allow for fully automatic calibration of the depth information to the image.
4. Orientation-based hashing for fast item orientation sensing: An important advantage of computer vision systems, especially when coupled with robotics, is that they can adapt to an orientation of a part or parcel that differs from what was expected. Quickly determining the part or parcel’s current orientation by comparing it against known orientations decreases cycle time and can accelerate vision systems, whether oriented toward inspection or guidance. Notably, this general-purpose use of hash tables for quick orientation look-up is not limited to robotics; it can be core to any vision system.
5. Distance determination between RFID tags: Using two or more RFID tags allows additional position information to be provided to guidance systems, including robotic guidance systems. The tags can be a mixture of active and passive tags, and can be activated by external RF fields, removing the need for on-tag power as desired. In particular, a relative distance between the tags is calculated using the minimum required signal strength, yielding an efficient method of ranging with inexpensive tags. This distance can then be used with guidance technologies.
6. Foreground extraction from compressed video: Compressed video streams are a frequent source of image data for high-speed vision systems. Unfortunately, it is often necessary to completely decompress and process the video stream in order to differentiate between background features that may not be of interest and foreground features that are. The innovative technology extracts foreground content directly from the compressed video stream, allowing the background to be ignored as desired. These foreground features can then be passed on to other machine vision techniques.
7. Stable grasp point selection for robot grippers: Traditional robot programming requires a pre-determined point at which to grasp an object to be worked on by the robot, such as with a gripper or suction-based end effector. While simple, this method cannot easily be adapted on the fly by intelligence added to the robot, such as a vision system. The innovative technology uses feedback from a 3D vision system or ultrasound measurement to select grasping points for a gripper from candidate points on an object, based on the part’s orientation and/or possible deformation.
8. Detecting incidental robot-human contact via physiological tremor: All vertebrate life creates small tremors in its body through biological processes such as muscle twitch, respiration, circulation, and both voluntary and involuntary motion of skeletal muscles; humans are no exception. By equipping a robot with sensors that can detect these tremors, it is possible to slow or arrest the robot’s motion during machine-human collaboration to prevent injury to humans or animals. The innovative technology does not depend on the type of robot, allowing flexible use in any dynamic machine-human interaction.
9. Alignment markers to facilitate detection of object orientation and deformation: Some parts are particularly difficult for vision systems to recognize and handle, especially parts that are flexible, crushable, or devoid of features. Simple markers on these parts do not provide as much information as possible about the alignment of the part, or whether it might be flexed or deformed. Special orientation markers allow a vision system not simply to see a part but to learn its size, shape, and alignment, and to detect potential deformation. This technology is particularly important when handling soft objects, such as in food handling applications.
10. 3D object rotation-based mechanical parts selection through 2D image processing: Although RVT’s vision algorithms are very tolerant of changing illumination, occasionally individual images are simply too dark to use without extra processing. By combining multiple 2D images that individually may not contain enough information to match a 3D object model, unique processing steps allow the system to learn object identity and orientation where simpler recognition processes would fail. Pre-generating models for the rotation of parts can give this new recognition process shorter cycle times when needed.
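As a purely illustrative sketch of the hash-table idea behind orientation-based sensing (item 4 above), quantized orientations can serve as keys for constant-time pose lookup. This is the general technique, not RVT’s patented method; all names, bin sizes, and values below are hypothetical.

```python
# Hypothetical sketch of orientation lookup via a hash table; this shows the
# general technique only, not RVT's patented method.
def orientation_key(angle_deg, bin_size=5.0):
    """Quantize an orientation (degrees) into a discrete hash key."""
    return int(round((angle_deg % 360.0) / bin_size))

# Precompute a table mapping quantized orientations to known poses.
known_poses = {orientation_key(a): {"angle": a} for a in range(0, 360, 5)}

def look_up_pose(measured_angle_deg):
    """O(1) lookup of the nearest precomputed pose for a measured orientation."""
    return known_poses.get(orientation_key(measured_angle_deg))

pose = look_up_pose(47.3)  # nearest 5-degree bin is 45 degrees
```

Because the lookup is a single hash probe rather than a search over all known orientations, cycle time stays flat as the set of learned poses grows.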
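For the RFID ranging concept (item 5 above), a common way to turn signal strength into distance is the standard log-distance path-loss model. The sketch below illustrates that general approach only; the parameter values are illustrative assumptions, not figures from the patent.

```python
# Minimal sketch of RSSI-based ranging with the standard log-distance
# path-loss model; parameter values are illustrative, not from the patent.
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model to estimate range in meters."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

d = rssi_to_distance(-60.0)  # -> 10.0 m with these example parameters
```

In practice the reference power at one meter and the path-loss exponent must be calibrated for the environment, which is why inexpensive-tag ranging schemes care about using the minimum signal information needed.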
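The foreground-extraction idea (item 6 above) rests on the fact that codecs such as MPEG-4 and H.264 already store per-block motion vectors, so moving foreground can be flagged without fully decoding pixel data. The sketch below is a toy illustration of that principle; the `Macroblock` structure and threshold are hypothetical stand-ins, not the patented design.

```python
# Illustrative sketch only: select "foreground" blocks from motion vectors
# already present in a compressed stream, skipping full pixel decoding.
# The Macroblock structure and threshold value are hypothetical.
from dataclasses import dataclass

@dataclass
class Macroblock:
    x: int            # block position in the frame grid
    y: int
    mv: tuple         # (dx, dy) motion vector read from the compressed stream

def foreground_blocks(blocks, threshold=2.0):
    """Keep only blocks whose motion-vector magnitude suggests real movement."""
    return [b for b in blocks
            if (b.mv[0] ** 2 + b.mv[1] ** 2) ** 0.5 > threshold]

frame = [Macroblock(0, 0, (0, 0)),   # static background
         Macroblock(1, 0, (5, -3)),  # moving part
         Macroblock(2, 0, (0, 1))]   # sub-threshold jitter
moving = foreground_blocks(frame)    # only the moving block survives
```

Only the surviving blocks would then need pixel-level decoding before being handed to downstream machine vision steps.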
“The vision algorithms already in our best-in-class eVF solution enable solutions to challenging 3D vision problems around the world,” Robotic VISION Technologies Chief Technology Officer Dr. John Brownlee said. “Armed with these inventions and new product offerings, RVT continues to outperform in the lab and at the customer site, with our driving passion for technical excellence.”
For further information, go to www.RoboticVISIONTech.com.
About Robotic VISION Technologies Inc.
Robotic VISION Technologies Inc., a privately held machine vision technology company with complete 2D to 3D software vision solutions for state-of-the-art machine automation applications, is a recognized leader in the field of Vision Guided Robotics (VGR). eVisionFactory™ (eVF™) is the brand name of the RVT vision guidance software platform and robot operating system that allows robots to “See, Think, and Do.” RVT’s software and technologies enable image recognition, machine vision, machine learning and robot guidance processes in industrial and non-industrial markets. RVT’s main 3D vision product features the Henry Ford Technology Award-winning single-camera 3D software product in addition to its proprietary validation tools – AutoCal™, AutoTrain™ and AccuTest™. In addition, RVT has won the prestigious BAE Systems Chairman’s Award for its technology and governmental work efforts. A sample of customers using RVT’s products includes Ford, General Motors, Chrysler, Honda, Toyota, Nissan, Harley-Davidson, Boeing and Johnson & Johnson. RVT’s eVF™ software platform is installed in hundreds of systems worldwide and operates every day controlling over one-half billion dollars of capital equipment. RVT’s Technology Center & Robotic Lab is located in Bloomfield Hills, Michigan. RVT has secured agreements with Intellectual Ventures and Jacobs Technology Inc. for invention growth and field support, respectively. RVT is building a second robotic lab in the Washington, D.C., area, opening in early 2016, to serve markets outside the automotive industry.
About Intellectual Ventures
Founded in 2000, Intellectual Ventures (“IV”) is the global leader in the business of invention. IV collaborates with leading inventors, partners with pioneering companies and invests both expertise and capital in the process of invention. With more than $6 billion of committed capital and more than 40,000 IP assets in active monetization programs, IV owns one of the world’s largest and fastest-growing intellectual property portfolios, which is licensed to the world’s most innovative companies. To date, IV has secured more than $3 billion of cumulative licensing revenue. IV’s mission is to energize and streamline an invention economy that will drive innovation around the world. For more information about IV, please visit www.intellectualventures.com. For more about IV’s Invention Development Fund, please visit www.inventiondevfund.com.
Media/Public Relations Director
Robotic VISION Technologies Inc.