Saturday, March 7, 2015
OmniVision to Settle Class Action Lawsuit
Courthouse News Service: A federal judge preliminarily approved a $12.5M settlement of a class action claiming that OmniVision misled shareholders into believing it had an exclusive contract related to the Apple iPhone 4. Shareholders claimed that company CFO Anson Chan made false and misleading statements about the technological advantages of its image sensor versus its competitors', and about its competitive position with customers such as Apple. These misrepresentations deceived the market into believing that OmniVision was maintaining its exclusive position as the supplier of image sensors for the iPhone 4, causing OmniVision's stock to be artificially inflated, according to the lawsuit.
Friday, March 6, 2015
Messages on Sony CCD Discontinuation
Vision Systems Design publishes a nice collection of various companies' announcements about Sony stopping CCD production. Among them are a Framos video, an IDS announcement, and a private email from Vision Components saying that just one CCD model, the B&W 1.4MP, 6.45um pixel ICX285, has escaped the axe so far:
Imaging Remains Major Power Consumer in Wearables
EETimes: TechInsights and eSoftThings analyzed the power consumption of three smart glasses: Google Glass, Vuzix M100, and Optinvent AR Glasses. Imaging functions consume the most power in all three devices:
Thursday, March 5, 2015
Rambus Launches Partners-in-Open-Development Program to Promote Lensless Sensors
Rambus announces its Partners-in-Open-Development (POD) program in collaboration with design firms frog and IXDS to promote the adoption of its lensless smart sensor (LSS) technology.
"Our Partners-in-Open-Development provides a great opportunity for innovators to develop technology for a smarter world. With this program, we encourage developers across the globe to join in to create new products that introduce real-world IoT applications that will touch so many parts of our lives. This is the first step toward expanding the accessibility of an ecosystem that will foster a new generation of innovation," said Gary Bronner, VP of Rambus Labs.
"Our work with lensless smart sensors through the POD program is helping to pave the way for innovations that are sure to inspire and become ubiquitous throughout our lives," said Andrew Zimmerman, president of frog. "The opportunity to partner with Rambus, through our R&D platform frogLabs and other organizations, to build out the burgeoning sensor ecosystem means we are helping to propel solutions that can be adopted much more quickly."
"Opening its innovative lensless smart technology to the developer community enables Rambus to promote what promises to be a robust, intelligent ecosystem around this new approach to sensing and imaging," said Dr. Reto Wettach, founder and design director with IXDS. "We are proud to be among the inaugural POD partners and look forward to being among the first contributors to identify and expand upon existing IoT-based applications for this technology."
Tom's Hardware publishes a nice article accompanied by a Youtube interview with Patrick Gill on the new developments:
NHK and Forza Silicon Present 133MP/60 fps Sensor Internals
Business Wire: Forza Silicon announces that researchers at NHK presented the design architecture and specifications of the 133MP 60fps CMOS image sensor at ISSCC 2015. The image sensor presented by NHK was designed by Forza Silicon and fabricated using a 0.18 µm 3.3V/1.8V process with 1D stitching.
To date, conventional image sensors for 8K applications have used 8 MP and 33 MP solutions in large optical formats. In order to eliminate the bulky lens/color-prism optical system of previous-generation cameras, the team developed a single-chip 133 MP image sensor. The sensor takes advantage of Forza Silicon’s Gen 3 readout architecture to achieve a frame rate of 60 fps. The Gen 3 readout architecture uses a pseudo-column parallel design with 14b redundant successive approximation register ADCs to achieve a throughput of 128 Gb/s at full resolution and frame rate.
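The quoted 128 Gb/s can be sanity-checked with simple arithmetic; a sketch, assuming each 14b ADC sample is carried in a 16-bit container word (the padding is an assumption, not stated in the announcement):

```python
# Back-of-envelope link throughput for a 133MP, 60fps sensor.
# Assumes each 14b ADC code travels in a 16-bit container word.
pixels = 133e6          # full-resolution pixel count
fps = 60                # frame rate
bits_per_sample = 16    # assumed container size for a 14b code

throughput_gbps = pixels * fps * bits_per_sample / 1e9
print(f"{throughput_gbps:.1f} Gb/s")  # close to the quoted 128 Gb/s
```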
“Our continued partnership with Forza Silicon through the years to support NHK has resulted in the success of a number of significant projects such as the development of the 133 MP sensor, and previously the 33 MP Super Hi-Vision image sensor. Forza’s dedicated support and its image sensor design expertise enabled us to achieve the Super Hi-Vision 8K single-chip camera — the largest pixel count of any video image sensor,” said Dr. Hiroshi Shimamoto, senior research engineer at NHK Science & Technology Research Laboratories (STRL).
“The advanced research and development initiatives by NHK continue to push the boundaries for UHDTV broadcast experiences. NHK’s next-generation digital broadcast systems stem from their long heritage as the world’s premier R&D center for broadcast camera technology. The groundbreaking technologies we’ve jointly developed have evolved over a span of 10 years as a result of our tight collaboration, and Forza’s decades of design experience and wide selection of silicon-proven IP,” said Barmak Mansoorian, president & co-founder at Forza Silicon.
Toshiba Announces Mass Production of 20MP, 1.12um Pixel Sensor
Business Wire: Keeping up with the production schedule given in its early product announcement, Toshiba starts mass production of the T4KA7, a 20MP, 1/2.4-inch CMOS sensor based on 1.12um BSI pixels. The sensor makes possible 20MP mobile camera modules with a height of 6mm or less.
IS Auto 2015 Speakers
The Smithers Image Sensors Auto conference, to be held on June 23-25 in Brussels, Belgium, announces a list of confirmed speakers including:
- Henrik Lind, Technical Expert, Volvo Car Corporation
- Martin Edney, Lead Systems Engineer, Rear & Surround Cameras, Advanced Driver Assistance Systems, Jaguar Land Rover
- Kevin Lu, Global Manager - Optical Engineering, Product Architect - Image Vision Systems, Magna Electronics
- Salah Hadi, Global R&D Director, Vision & Night Vision Systems, Autoliv
- Riccardo Mariani, CTO, Yogitech
- Benjamin Stauss, R&D Engineer – Optical Camera Testing, TriOptics
Also, Image Sensors publishes an interview with Markus Rossi, Chief Innovation Officer at Heptagon, on the company's 3D imaging solutions. A few quotes:
"Optimizing image sensors for 3D imaging is an opportunity, since those systems typically have different optical requirements and layouts. A key aspect of Heptagon’s products is their small mechanical form factor. A novel, wafer-based camera assembly technology, called “FCP”, reduces the mechanical complexity of the camera and projection modules and therefore enables high-turnover, passive-alignment manufacturing methods."
"One example of a very important component in depth sensing is the so-called IR illuminator. Our miniature illumination systems are optimized for uniform illumination in ToF applications, and as pattern generators for contrast enhancement in active stereo as well as structured light systems. Each of these systems needs to be tuned for best optical performance (efficiency, contrast, …) and smallest form factor (they need to fit into mobile devices)."
Wednesday, March 4, 2015
Taiwan and China CIS Foundries
Digitimes posts an article on Taiwan- and China-based CIS foundries. A few quotes:
"TSMC has been the major CIS module production partner for OmniVision although the module vendor is also outsourcing part of its production to China-based XMC. However, the partnership between TSMC and OmniVision may change in the future as China-based Hua Capital, an investment firm, has offered a bid to acquire OmniVision, indicated the sources.
UMC has tied up with STMicroelectronics to develop 65nm BSI CIS technology and is currently producing BSI products at its Fab 12i in Singapore."
Teledyne Dalsa Announces New X-Ray Imagers
Marketwired: Teledyne DALSA's new dental Xineos models, including the 1511, 1501 and 2301, utilize Teledyne DALSA's sixth-generation radiation-hard CMOS active pixel design with active areas of 15x11cm, 152x7mm, and 228x7mm, respectively. The new sensors offer switchable saturation dose, low power dissipation, and built-in gain, offset and defect correction.
"With the addition of our new Xineos CMOS X-Ray detectors, we're offering dental equipment manufacturers a complete portfolio of more versatile, more flexible, lower dose and cost effective options to satisfy a wider range of dental practices," commented Dr. Mila Heeman, Senior Marketing Manager at Teledyne DALSA. "Our detectors allow dentists to offer a more precise diagnosis as a result of our continued commitment to improving CMOS X-Ray technology."
The Xineos range of X-ray sensors features high frame rates in the 30-45fps range. A DALSA YouTube video demos X-ray imaging at high speed:
ON Semi Shows its OIS Solution
ON Semi demos its OIS solution in this YouTube video. The company's demo covers camera shake in the 1Hz to 10Hz frequency range, a bit slow for consumer cameras, but it might fit some other applications:
Mobileye Unveils its 4th Gen Vision Processor
PRNewswire: Mobileye introduces its 4th-generation system-on-chip, the EyeQ4, consisting of 14 computing cores, of which 10 are specialized vector accelerators for visual processing and understanding. The first design win for the EyeQ4 has been secured with a global premium European car manufacturer, with production to start in early 2018. The EyeQ4 would be part of a scalable camera system, starting from monocular processing for collision avoidance applications, in compliance with EU NCAP, US NHTSA and other regulatory requirements, up to a trifocal camera configuration supporting high-end customer functions including semi-autonomous driving. The EyeQ4 would support fusion with radars and scanning-beam lasers in the high-end customer functions.
"Supporting a camera centric approach for autonomous driving is essential as the camera provides the richest source of information at the lowest cost package. To reach affordable high-end functionality for autonomous driving requires a computing infrastructure capable of processing many cameras simultaneously while extracting from each camera high-level meaning such as location of multiple types of objects, lanes and drivable path information," said Amnon Shashua, cofounder, CTO and Chairman of Mobileye. "The EyeQ4 continues a legacy that began in 2004 with EyeQ1 where we leveraged deep understanding of computer vision processing to come up with highly optimized architectures to support extremely intensive computations at automotive compliant power consumption of 2-3 Watts."
The EyeQ4 provides "super-computer" capabilities of more than 2.5 teraflops within a low-power (approximately 3W) automotive grade system-on-chip.
EyeQ4-based ADAS uses computer vision algorithms like Deep Layered Networks and Graphical Models while processing information from 8 cameras simultaneously at 36fps. The EyeQ4 will accept multiple camera inputs from a trifocal front-sensing camera configuration, surround-view systems of four wide-field-of-view cameras, and a long-range rear-facing camera, plus information from multiple radars and scanning-beam laser scanners. Taken together, the EyeQ4 will process a safety "cocoon" around the vehicle – essential for autonomous driving.
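A back-of-envelope sketch of what those figures imply per camera, taking the quoted numbers at face value:

```python
# Rough per-camera compute budget implied by the EyeQ4 figures.
total_flops = 2.5e12   # "more than 2.5 teraflops"
power_w = 3.0          # "approximately 3W"
cameras = 8
fps = 36

flops_per_camera_frame = total_flops / (cameras * fps)
efficiency = total_flops / power_w
print(f"{flops_per_camera_frame:.2e} ops per camera frame")
print(f"{efficiency / 1e12:.2f} TFLOPS/W")
```

The roughly 8.7 billion operations available per camera frame gives a sense of why dedicated vector accelerators, rather than general-purpose cores, dominate the core count.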
Engineering samples of the EyeQ4 are expected to be available by Q4 2015. First test hardware with the full suite of applications, including an active safety suite of customer functions, environmental modeling (for each of the 8 cameras), path planning for hands-free driving, and sensor fusion, is expected to be available in Q2 2016.
Thanks to MM for the link!
Tuesday, March 3, 2015
TowerJazz Makes IR Sensors for Intel RealSense Cameras
GlobeNewswire: TowerJazz begins mass production of an IR sensor used by Intel in one of its new 3D sensing solutions. Intel chose TowerJazz's 0.11um IS11 process for its pixel performance in NIR combined with high speed, high QE, and high optical resolution. The unique pixel developed by TowerJazz for Intel is a very fast 3.5um global shutter pixel that achieves high QE in NIR, specifically at the scanning laser wavelength, with high sensor resolution.
"Partnering with TowerJazz was a part of our success in producing our advanced image sensor for 3D imaging and was a natural choice as they were able to offer the required technical specifications and performance for this breakthrough technology," said Sagi Ben Moshe, Director Depth Camera Engineering, Intel Corporation.
"This collaboration between Intel and TowerJazz was a natural fit. Intel's leadership in this market, combined with our leading technology that provides outstanding pixel performance for near IR 3D imaging, along with the proximity of our Israel fab with Intel Israel, the group developing this technology, was an ideal alignment," said Russell Ellwanger, CEO, TowerJazz. "We are very excited to partner with Intel to produce lifestyle changing technology that will revolutionize the way we interact with devices in both our professional and personal lives. We highly value our business relationship with Intel and look forward to further collaboration on their sensing technology."
"It is truly amazing and thrilling to see our lengthy experience in the imaging field and our own CMOS image sensor technology developed in-house, combined with all of the R&D work we have undertaken for many years come to fruition in such a groundbreaking way," said Dr. Avi Strum, VP and GM, CMOS Image Sensor Business Unit, TowerJazz. "Intel sensing solutions will bring consumers new experiences and will change the way people capture and share 3D images. We are very proud of our work with Intel and our ability to assist them in bringing cutting-edge technologies to market quickly and in high volume."
TowerJazz stock jumped by 7.5% after the announcement:
Altera, Cadence Demos
Altera shows its low power stereo vision FPGA solution in this Youtube video:
A Cadence YouTube video presents its video/imaging DSP IP platform, said to consume 10x less power:
Samsung Announces 8MP RWB ISOCELL Sensor
Business Wire, Samsung Tomorrow: Samsung’s 8MP ISOCELL RWB CMOS image sensor, the S5K4H5YB, is aimed at front-facing mobile cameras. By combining ISOCELL technology with the newly developed RWB (Red-White-Blue) color filter, the new image sensor enhances light sensitivity and color fidelity, allowing an improvement of over 3dB in SNR in low-light settings. In addition, unlike other color pattern configurations, the high similarity between the RWB and RGB patterns eliminates the need for an RGB converter for the RWB filter, which prevents unnecessary color deviation.
Physical barriers between ISOCELL pixels allow a 30% decrease in crosstalk and a 30% increase in full-well capacity compared to "conventional" BSI sensors, possibly the S5K4H5YC.
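The claimed gains translate to simple dB arithmetic; a sketch, assuming the SNR figure is quoted in power dB and that dynamic range scales with full-well capacity:

```python
import math

# A 3 dB SNR improvement corresponds to roughly doubling signal power
snr_ratio = 10 ** (3 / 10)          # ~2.0x

# A 30% full-well increase adds about 2.3 dB of dynamic-range headroom
# (20*log10, since full-well capacity is a signal-amplitude quantity)
fwc_gain_db = 20 * math.log10(1.3)  # ~2.28 dB

print(f"SNR ratio: {snr_ratio:.2f}x, DR gain: {fwc_gain_db:.2f} dB")
```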
Samples of the 8MP RWB ISOCELL image sensor are available for customers, and mass production is scheduled for Q2 2015. Incidentally, the S5K4H5YB was already announced a year and a half ago. At that time, Samsung said "The S5K4H5YB is currently sampling to customers with mass production scheduled for Q4 2013." However, the earlier announcement did not mention the RWB CFA.
The Samsung S5K4H5YB product page says that it's already in production:
Monday, March 2, 2015
OmniVision Shrinks Pixel to 1um
PR Newswire: OmniVision announces the OV16880, a 16MP sensor built on OmniVision's PureCel-S stacked die technology. The 1/3-inch OV16880 introduces a new 1um (!) pixel technology, as well as advanced features such as phase detection autofocus (PDAF).
"Industry observers expect the 1/3-inch image sensor market for 13-megapixel to 16-megapixel resolution segments to double within the next two years, driven mostly by the proliferation of higher resolution mainstream smartphones and tablets," said Kalai Chinnaveerappan, senior product marketing manager at OmniVision. "The OV16880 is the industry's first 1/3-inch 16-megapixel image sensor, putting it in the forefront of this high-growth market segment. The sensor enables slim devices to transition from a 13-megapixel to 16-megapixel camera while maintaining excellent image quality and pixel performance."
The OV16880's PureCel-S stacked-die pixel array features buried color filter array (BCFA) technology, which reduces pixel crosstalk and improves SNR. The OV16880 captures 16MP images at 30fps, allowing burst photography and zero shutter lag at full resolution. Additionally, the sensor can capture 4K video at 30fps, 1080p video at 90fps, and 720p video at 120fps. The OV16880 also supports interlaced high dynamic range (iHDR) timing functionality. The sensor fits into an 8.5mm x 8.5mm module with a z-height of less than 5mm.
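Comparing the pixel throughput of the listed modes shows that full-resolution 30fps capture sets the highest readout rate; a sketch, assuming "4K" here means 3840x2160 UHD:

```python
# Pixel throughput of each OV16880 capture mode (assumes "4K" = 3840x2160).
modes = {
    "16MP @ 30fps":  (16e6, 30),
    "4K @ 30fps":    (3840 * 2160, 30),
    "1080p @ 90fps": (1920 * 1080, 90),
    "720p @ 120fps": (1280 * 720, 120),
}
rates = {name: px * fps for name, (px, fps) in modes.items()}
for name, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rate / 1e6:.0f} Mpix/s")
```

Full-resolution capture at 480 Mpix/s is the binding rate; every video mode reads out fewer pixels per second than the still-image path.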
The sensor is currently available for sampling, and is expected to enter volume production in Q3 2015.
Toshiba Announces 13MP BSI Sensor
Business Wire: Toshiba announces the T4KB3, a 13MP BSI CMOS image sensor in a 1/3.07-inch optical format for smartphones and tablets. A new design methodology helps reduce the new 13MP sensor's power consumption to 53% of that of Toshiba's sensor currently in mass production, to 200mW or less at 30fps. The 1.12um pixel-based T4KB3 is also said to be the world's smallest 13MP sensor. Toshiba's "Bright Mode" technology boosts image brightness up to four times in Full-HD video capture at the equivalent of 120fps. Sample shipments start today.
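Taking the two figures together implies the power draw of the previous-generation sensor; a sketch, assuming 200mW is exactly 53% of the prior figure:

```python
# If 200mW is 53% of the previous sensor's power, the prior figure was:
new_power_mw = 200
prior_power_mw = new_power_mw / 0.53
print(f"{prior_power_mw:.0f} mW")  # roughly 377 mW at 30fps
```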
Freescale Presents Automotive Vision Processor
Business Wire: Freescale introduces the S32V vision microprocessor, said to be the first automotive vision SoC with the requisite reliability, safety and security measures to automate and ‘co-pilot’ a self-aware car. “Many automotive vision systems available today are based on consumer-oriented silicon solutions originally designed to enhance gaming graphics or run smartphone apps. But in a new era where cars will serve as trusted co-pilots, utilizing consumer-oriented silicon is fundamentally unwise,” said Bob Conrad, SVP and GM of Automotive MCUs for Freescale. “Relying on anything less than automotive-grade silicon to take control of a vehicle and make critical driving decisions is simply not acceptable – not for me, not for my family and not for my customers.”
The S32V vision microprocessor integrates the 2nd generation CogniVue APEX-642 core image processing technology, as well as four ARM Cortex-A53 cores. Full market availability for the S32V is expected in July 2015.
Mentor Graphics Buys Tanner
Semiconductor Engineering reports that Mentor Graphics has purchased Tanner EDA for an undisclosed sum. Tanner CAD is popular among low-budget image sensor design houses.
Sunday, March 1, 2015
Toshiba Announces ADAS Image Processor
Toshiba announces the TMPV7608XBG, an image recognition processor that provides recognition and detection of vehicles and pedestrians at night. The new processor is capable of 1.9 Tera operations per second (TOPS) and integrates new feature descriptors that make use of color-based image information.
The new processor is said to achieve nighttime pedestrian detection as reliable as the daytime detection available with conventional vision systems. Toshiba's original Enhanced CoHOG (Co-occurrence Histograms of Oriented Gradients) accelerators combine luminance-based CoHOG feature descriptors with color-based feature descriptors obtained using a newly developed technique. This enhancement improves recognition accuracy, especially at nighttime and in scenes with smaller luminance differences between objects and the background:
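CoHOG extends plain HOG by histogramming pairs of quantized gradient orientations at fixed pixel offsets. A minimal NumPy sketch of the luminance-only descriptor for a single offset (the function name, bin count, and single-offset simplification are illustrative assumptions, not Toshiba's implementation, which uses many offsets and hardware acceleration):

```python
import numpy as np

def cohog_single_offset(gray, offset=(0, 1), n_bins=8):
    """Co-occurrence histogram of gradient orientations for one offset.

    Returns an (n_bins, n_bins) matrix counting how often orientation
    bin i at a pixel co-occurs with bin j at displacement (dy, dx).
    Illustrative sketch only; assumes dy, dx >= 0.
    """
    gy, gx = np.gradient(gray.astype(float))
    ang = np.arctan2(gy, gx) % np.pi                  # unsigned orientation
    q = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)

    dy, dx = offset
    h, w = q.shape
    a = q[:h - dy, :w - dx]                           # reference pixels
    b = q[dy:, dx:]                                   # displaced pixels

    hist = np.zeros((n_bins, n_bins), dtype=int)
    np.add.at(hist, (a.ravel(), b.ravel()), 1)        # count bin pairs
    return hist
```

On a pure horizontal ramp image every gradient points the same way, so all counts fall into a single bin pair; real detectors concatenate such matrices over many offsets and image sub-regions into one feature vector.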
The TMPV7608XBG incorporates a Structure from Motion (SfM) accelerator that allows detection of general stationary obstacles such as fallen objects and landslides. The SfM accelerator provides three-dimensional (3D) estimates of the distance to, and the height and width of, the stationary obstacles, based on a sequence of images from a monocular camera. This accelerator makes it possible to detect any stationary obstacles without a learning curve, as well as moving objects (using motion analysis) and a particular class of objects such as pedestrians and vehicles (using pattern recognition).
The TMPV7608XBG can simultaneously perform Traffic Light Recognition (TLR), Traffic Sign Recognition (TSR), Lane Departure Warning (LDW), Lane Keeping Assist (LKA), Vehicle and Pedestrian Collision Warning and Collision Avoidance, High-Beam Assistance, and General Obstacle Collision Warning. The device interfaces with up to 8 cameras while consuming 3.37W of power:
The sample shipment started in January 2015.