Have you seen Audi's remarkable automatic parking feature, in which the car finds a parking space and parks itself with no driver at the wheel? Have you used a Kinect controller to play Xbox 360 games, or simply taken a bite of fresh fruit from a local shop? If so, you can count yourself a witness to the era of the Smarter Vision system.

From the most sophisticated electronic systems to the humble apple, products of every kind are touched by Smarter Vision technology. Impressive as today's uses of Smarter Vision are, experts say we haven't seen anything yet. They predict that over the next 10 years, most electronic systems, from automotive to factory automation, medical, surveillance, consumer, aerospace and military products, will include ever more powerful Smarter Vision technology that will greatly enrich people's lives, and even save lives.

The Smarter Vision system will quickly become commonplace, and as these systems grow more sophisticated in the coming years, we are likely to see self-driving cars on the highway network. Medical devices such as Intuitive Surgical's stunning robot-assisted surgical system will advance further, allowing surgeons to perform procedures remotely. Television and remote monitoring systems will offer an unprecedented level of interactivity, while the content on the screens of cinemas, homes and stores will cater to the interests and emotions of each individual consumer.

The Xilinx All Programmable Smarter Vision solution is leading this revolution. The Zynq™-7000 All Programmable SoC is the industry's first device to combine a dual-core ARM Cortex™-A9 MPCore™, programmable logic and key peripherals on a single chip. Building on it, Xilinx has launched a supporting infrastructure (tools and a SmartCORE IP portfolio) that will play a vital role in developing these outstanding innovative products and accelerating their launch. This infrastructure includes Vivado™ HLS (High-Level Synthesis), the latest IP Integrator tools, the OpenCV (computer vision) library, SmartCORE™ IP, and dedicated development kits.

Steve Glaser, senior vice president of corporate strategy and marketing at Xilinx, said: "With the Xilinx All Programmable Smarter Vision solution, we will help customers take the lead in launching next-generation Smarter Vision systems. Over the past 10 years, customers have used our FPGAs to make up for the performance shortfalls of their system processors. In the Zynq-7000 All Programmable SoC, the processor and FPGA logic sit on the same chip, which means developers now have an ideal chip platform for Smarter Vision applications. We have built a robust, reliable development environment consisting of Vivado HLS, the latest IP Integrator tools, the OpenCV library, SmartCORE IP and development kits to further complement the Zynq-7000 All Programmable SoC. With these Smarter Vision technologies, our customers can start their new designs immediately and deliver innovative products with higher efficiency and system performance, lower system power and bill-of-materials cost, and faster time to market, thereby increasing profitability, enriching people's lives and saving lives."

From dumb cameras to Smarter Vision

The roots of the Smarter Vision system lie in embedded vision. If you are not familiar with embedded vision, here is a brief introduction to the technology and its evolution.

According to the definition of the rapidly growing industry organization, the Embedded Vision Alliance, embedded vision combines two technologies: embedded systems (any electronic system that uses a processor for computing) and computer vision (sometimes called machine vision).

Jeff Bier, founder of the Embedded Vision Alliance and CEO of the consultancy BDTI, said that embedded vision technology has had a significant impact on several industries, because the technology has evolved far beyond the era of analog camera systems with motor-driven pan/tilt/zoom. Bier said: "We have lived in the digital age for a while, and have watched embedded vision evolve from early digital systems that were good at compressing, storing or enhancing what a camera sees into Smarter embedded vision systems that now know what they are shooting." Moreover, advanced embedded vision systems, or Smarter Vision systems, not only enhance and analyze images but also trigger actions based on the results of that analysis. As a consequence, the amount of processing and computing power required, as well as the complexity of the algorithms, increase significantly. The rapid development of the surveillance market is one of the best examples of this remarkable evolution.

Twenty years ago, surveillance system vendors competed to provide the best lenses, enhanced by mechanical systems that performed pan/tilt/zoom functions for a clearer, wider field of view. These systems consisted essentially of analog video cameras, coaxial cable for the connections, analog monitors, and video recording equipment watched over by security personnel. The clarity, reliability and effectiveness of these systems were determined by the quality of the optics and lenses, and by the vigilance of the security personnel monitoring the camera feeds.

The advent of embedded vision technology enabled surveillance equipment companies to use lower-cost cameras based on digital technology. Digital processing gave their systems superior functionality, surpassing analog, lens-based security systems in performance at a lower cost.

Fisheye lenses and embedded processing systems running various vision-specific algorithms greatly enhance the image quality the camera produces. These systems calibrate for lighting conditions to improve focus, enhance color and digitally zoom into areas of interest, and they eliminate the need for mechanical motor control to perform pan/tilt/zoom, further improving system reliability. Companies using digital signal processing can offer surveillance systems with video resolutions of 1080p and beyond. Indeed, in UAVs and military satellites, embedded vision has achieved unprecedented resolutions. Capturing images at these resolutions means processing a huge number of pixels, and enhancing and manipulating those images demands even greater processing power.
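The motorless pan/tilt/zoom mentioned above amounts to cropping a region of interest out of a high-resolution frame and scaling it back up digitally. The sketch below illustrates the idea in pure Python with nearest-neighbor upscaling; the function name and the tiny test frame are illustrative assumptions, not any Xilinx API.

```python
# Minimal sketch of "digital" pan/tilt/zoom: crop a window centered at
# (cx, cy), then upscale it back to full size with nearest-neighbor
# sampling. Frames are plain lists of rows of intensity values.

def digital_pan_zoom(frame, cx, cy, zoom):
    """Crop a 1/zoom-sized window centered near (cx, cy) and upscale it
    back to the original frame size."""
    h, w = len(frame), len(frame[0])
    win_h, win_w = max(1, int(h / zoom)), max(1, int(w / zoom))
    # Clamp the window inside the frame: this clamping IS the digital pan/tilt.
    top = min(max(cy - win_h // 2, 0), h - win_h)
    left = min(max(cx - win_w // 2, 0), w - win_w)
    out = []
    for y in range(h):
        src_y = top + (y * win_h) // h
        out.append([frame[src_y][left + (x * win_w) // w] for x in range(w)])
    return out

# 4x4 test frame with a bright 2x2 patch in the lower-right corner.
frame = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]
zoomed = digital_pan_zoom(frame, cx=3, cy=3, zoom=2)  # zoom onto the patch
```

Because no motor is involved, the "camera" can re-aim instantaneously and with no moving parts to wear out, which is the reliability gain the text describes.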

But manipulating the image through digital signal processing and enhancing its clarity is only the beginning. With far more advanced pixel-processing capabilities, surveillance system manufacturers have begun to create more sophisticated embedded vision systems that can run analytics in real time on the high-quality images their digital systems capture. Every year, vision system designers introduce more powerful algorithms that enable more dynamic analytics. The earliest of these embedded vision systems could only detect specific colors, shapes and movement. This capability quickly evolved into algorithms that can detect an object crossing a virtual fence in the camera's field of view, determine whether the object in the image is a person, and even identify a specific person by linking to a database.
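Two of the early analytics stages described above, motion detection and virtual-fence crossing, can be sketched very simply. The following pure-Python toy uses frame differencing to find moving pixels and a centroid-versus-line test for the fence; the thresholds, fence position and function names are assumptions made for illustration, not any vendor's algorithm.

```python
# Toy analytics pipeline: frame-differencing motion detection plus a
# "virtual fence" check that fires when a tracked object's centroid
# crosses a chosen image column.

FENCE_X = 5            # virtual vertical fence at image column 5
MOTION_THRESHOLD = 10  # minimum per-pixel intensity change to count as motion
BRIGHT_THRESHOLD = 30  # minimum intensity to count as the tracked blob

def moving_pixels(prev, curr):
    """Return (x, y) coordinates whose intensity changed significantly."""
    return [(x, y)
            for y, (prow, crow) in enumerate(zip(prev, curr))
            for x, (p, c) in enumerate(zip(prow, crow))
            if abs(c - p) >= MOTION_THRESHOLD]

def blob_centroid_x(frame):
    """Average x position of bright pixels (a crude object tracker)."""
    xs = [x for row in frame for x, v in enumerate(row) if v >= BRIGHT_THRESHOLD]
    return sum(xs) / len(xs)

def crossed_fence(prev_x, curr_x):
    """True when the centroid moved from one side of the fence to the other."""
    return (prev_x < FENCE_X) != (curr_x < FENCE_X)

# Three single-row "frames": a bright blob appears at column 2, then
# moves to column 7, crossing the fence at column 5.
f0 = [[0] * 10]
f1 = [[0, 0, 50, 0, 0, 0, 0, 0, 0, 0]]
f2 = [[0, 0, 0, 0, 0, 0, 0, 50, 0, 0]]
```

Real systems replace each of these toy stages with far heavier algorithms (background modeling, classification, re-identification), which is exactly why the processing demands rise so sharply.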

State-of-the-art surveillance systems provide analytics that track monitored individuals across the security network, even after they leave one camera's field of view, pass through a blind spot, and enter the field of view of another camera on the network. Vision designers have built some of these systems to detect unusual or suspicious movement. Mark Timmons, systems architect for the Xilinx Industrial, Scientific and Medical (ISM) business unit, said: "Analytics are the biggest trend in today's surveillance market. They can overcome human error and even replace painstaking manual observation and decision-making. Imagine how difficult monitoring is in crowded environments such as train stations and sports venues; an analytics capability that can detect dangerous overcrowding, or individuals exhibiting dangerous or erratic behavior, offers a distinct advantage."

To sharpen these analytics and increase the effectiveness of such systems, surveillance and many other markets using Smarter Vision technology are increasingly adopting "fusion" architectures that combine cameras with other sensing technologies such as thermal imaging, radar, sonar and LIDAR (light/laser detection and ranging). This allows Smarter Vision designers to further enhance the final image, enabling night vision, detecting heat signatures, or picking up objects the camera alone cannot capture or see. This capability significantly reduces false detections, yielding more accurate analysis. There is no doubt that sensor fusion, and the subsequent analysis of the data those fused sensors collect, will bring greater complexity and demand even more powerful analytical processing.
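One simple way such a fusion architecture can cut false alarms is to let each sensor vote with a confidence score and raise an alarm only when the combined score clears a threshold. The sketch below shows the idea; the sensor names, weights and threshold are invented for illustration, and a real deployment would calibrate them per site.

```python
# Toy sensor-fusion alarm: a weighted average of per-sensor detection
# confidences in [0, 1], compared against a single alarm threshold.
# Weights and threshold are illustrative assumptions.

WEIGHTS = {"camera": 0.5, "thermal": 0.3, "radar": 0.2}
ALARM_THRESHOLD = 0.6

def fused_confidence(scores):
    """Weighted average of per-sensor confidences; missing sensors count as 0."""
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS)

def raise_alarm(scores):
    return fused_confidence(scores) >= ALARM_THRESHOLD

# A camera-only "detection" (say, a moving shadow) is not corroborated
# by the thermal sensor, so the fused score stays below the threshold...
shadow = {"camera": 0.9, "thermal": 0.1, "radar": 0.0}
# ...while a person confirmed by both camera and thermal triggers the alarm.
person = {"camera": 0.8, "thermal": 0.9, "radar": 0.5}
```

Requiring corroboration across sensing modalities is what suppresses false positives: an artifact that fools one sensor rarely fools the others at the same time.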

Timmons noted that another big trend in this market is to perform all of these kinds of complex analytics at the "edge" of the surveillance network, that is, in each camera, rather than having each camera transfer its raw data to a central mainframe that then performs more refined analysis on the multiple incoming feeds. Localizing the analytics makes the overall security system more flexible, allowing each point in the system to detect events faster and more accurately, so that if a camera does detect a real threat, it can alert the operator sooner.

Localizing the analytics means that each unit not only requires more processing power to enhance and analyze the images its camera captures, but must also be compact enough to fit into a highly integrated electronic system. And because each unit must communicate reliably with the rest of the network, the camera must also integrate electronic communication capabilities, further increasing computational complexity. These monitoring units are gradually becoming part of larger surveillance systems through wireless network connections; and those surveillance systems will in turn become part of larger enterprise networks and even larger global networks, much like the U.S. military's global information grid (see the cover story of Xcell Journal issue 69: http://china.xilinx.com/china/archives/xcell/Xcell69.pdf).

This level of complexity is expected not only in surveillance but across the military and defense markets, from infantry helmets to military satellites networked with central command. Perhaps even more remarkably, Smarter Vision technology is rapidly entering other fields to improve quality of life and keep people safe.
