CCD vs. CMOS
http://www.teledynedalsa.com/corp/markets/CCD_vs_CMOS.aspx
The technologies and the markets that use them continue to mature, but the comparison is still a lot like apples vs. oranges: they can both be good for you. Teledyne DALSA offers both. CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) image sensors are two different technologies for capturing images digitally. Each has unique strengths and weaknesses, giving advantages in different applications. Neither is categorically superior to the other, although vendors selling only one technology have usually claimed otherwise. In the last five years much has changed with both technologies, and many projections regarding the demise or ascendancy of either have been proved false. The current situation and outlook for both technologies are vibrant, but a new framework exists for considering the relative strengths and opportunities of CCD and CMOS imagers. Both types of imager convert light into electric charge and process it into electronic signals. In a CCD sensor, every pixel's charge is transferred through a very limited number of output nodes (often just one) to be converted to voltage, buffered, and sent off-chip as an analog signal. All of the pixel area can be devoted to light capture, and the output's uniformity (a key factor in image quality) is high. In a CMOS sensor, each pixel has its own charge-to-voltage conversion, and the sensor often also includes amplifiers, noise-correction, and digitization circuits, so that the chip outputs digital bits. These other functions increase the design complexity and reduce the area available for light capture. With each pixel doing its own conversion, uniformity is lower. But the chip can be built to require less off-chip circuitry for basic operation. For more details on device architecture and operation, see our original "CCD vs. CMOS: Facts and Fiction" article and its 2005 update, "CMOS vs.
CCD: Maturing Technologies, Maturing Markets." CCDs and CMOS imagers were both invented in the late 1960s and 1970s (DALSA founder and CEO Dr. Savvas Chamberlain was a pioneer in developing both technologies). CCDs became dominant, primarily because they gave far superior images with the fabrication technology available. CMOS image sensors required more uniformity and smaller features than silicon wafer foundries could deliver at the time. Not until the 1990s did lithography develop to the point that designers could begin making a case for CMOS imagers again. Renewed interest in CMOS was based on expectations of lowered power consumption, camera-on-a-chip integration, and lowered fabrication costs from the reuse of mainstream logic and memory device fabrication. While all of these benefits are possible in theory, achieving them in practice while simultaneously delivering high image quality has taken far more time, money, and process adaptation than original projections suggested (see "CMOS Development's Winding Path" below). Both CCDs and CMOS imagers can offer excellent imaging performance when designed properly. CCDs have traditionally provided the performance benchmarks in the photographic, scientific, and industrial applications that demand the highest image quality (as measured in quantum efficiency and noise) at the expense of system size. CMOS imagers offer more integration (more functions on the chip), lower power dissipation (at the chip level), and the possibility of smaller system size, but they have often required tradeoffs between image quality and device cost. Today there is no clear line dividing the types of applications each can serve. CMOS designers have devoted intense effort to achieving high image quality, while CCD designers have lowered their power requirements and pixel sizes.
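The readout difference described above (one shared output chain for a CCD versus per-pixel conversion for CMOS) can be illustrated with a toy simulation. This is a sketch, not a device model; the pixel count, the 1% gain-mismatch figure, and all names here are illustrative assumptions:

```python
import random

# Toy comparison of the two readout chains described above (illustrative only).
# A CCD shifts every pixel's charge through ONE output amplifier, so all pixels
# see the same gain; a CMOS sensor converts at each pixel, so per-amplifier
# gain mismatch shows up as fixed-pattern non-uniformity.

rng = random.Random(0)
charges = [1000.0] * 16          # identical illumination on 16 pixels

# CCD-style: one shared charge-to-voltage amplifier
ccd_gain = 1.0 + rng.gauss(0, 0.01)
ccd_out = [q * ccd_gain for q in charges]

# CMOS-style: one amplifier per pixel, each with ~1% gain mismatch (assumed)
cmos_out = [q * (1.0 + rng.gauss(0, 0.01)) for q in charges]

spread = lambda v: max(v) - min(v)
print(f"CCD output spread:  {spread(ccd_out):.2f}")   # 0.00: perfectly uniform
print(f"CMOS output spread: {spread(cmos_out):.2f}")  # nonzero: pixel-to-pixel mismatch
```

The CCD's single conversion node makes its output uniform by construction, which is the "key factor in image quality" point made above; the CMOS sensor's per-pixel amplifiers trade that uniformity for on-chip integration.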
As a result, you can find CCDs in low-cost, low-power cellphone cameras and CMOS sensors in high-performance professional and industrial cameras, directly contradicting the early stereotypes. It is worth noting that the producers succeeding with "crossovers" have almost always been established players with years of deep experience in both technologies. Costs are similar at the chip level. Early CMOS proponents claimed CMOS imagers would be much cheaper because they could be produced on the same high-volume wafer processing lines as mainstream logic or memory chips. This has not been the case. The accommodations required for good imaging performance have required CMOS designers to iteratively develop specialized, optimized, lower-volume mixed-signal fabrication processes--very much like those used for CCDs. Proving out these processes at successively smaller lithography nodes (0.35 µm, 0.25 µm, 0.18 µm...) has been slow and expensive; those with a captive foundry have an advantage because they can better maintain the attention of the process engineers. CMOS cameras may require fewer components and less power, but they still generally require companion chips to optimize image quality, increasing cost and reducing the advantage they gain from lower power consumption. CCD devices are less complex than CMOS, so they cost less to design. CCD fabrication processes also tend to be more mature and optimized; in general, it will cost less (in both design and fabrication) to yield a CCD than a CMOS imager for a specific high-performance application. However, wafer size can be a dominating influence on device cost; the larger the wafer, the more devices it can yield, and the lower the cost per device. 200 mm wafers are fairly common for third-party CMOS foundries, while third-party CCD foundries tend to offer 150 mm. Captive foundries use 150 mm, 200 mm, and 300 mm production for both CCD and CMOS. The larger issue around pricing is sustainability.
Since many CMOS start-ups pursued high-volume, commodity applications from a small base of business, they priced below cost to win business. For some, the risk paid off and their volumes provided enough margin for viability. But others had to raise their prices, while still others went out of business entirely. High-risk startups can be interesting to venture capitalists, but imager customers require long-term stability and support. While cost advantages have been difficult to realize and on-chip integration has been slow to arrive, speed is one area where CMOS imagers can demonstrate considerable strength because of the relative ease of parallel output structures. This gives them great potential in industrial applications. CCDs and CMOS will remain complementary. The choice continues to depend on the application and the vendor more than the technology. Teledyne DALSA's approach is "technology-neutral": we are one of the few vendors able to offer real solutions with both CCDs and CMOS.
Feature and Performance Comparison

Feature               | CCD                                    | CMOS
Signal out of pixel   | Electron packet                        | Voltage
Signal out of chip    | Voltage (analog)                       | Bits (digital)
Signal out of camera  | Bits (digital)                         | Bits (digital)
Fill factor           | High                                   | Moderate
Amplifier mismatch    | N/A                                    | Moderate
System noise          | Low                                    | Moderate
System complexity     | High                                   | Low
Sensor complexity     | Low                                    | High
Camera components     | Sensor + multiple support chips + lens | Sensor + lens possible, but additional support chips common
Relative R&D cost     | Lower                                  | Higher
Relative system cost  | Depends on application                 | Depends on application

Performance           | CCD                                    | CMOS
Responsivity          | Moderate                               | Slightly better
Dynamic range         | High                                   | Moderate
Uniformity            | High                                   | Low to moderate
Uniform shuttering    | Fast, common                           | Poor
Speed                 | Moderate to high                       | Higher
Windowing             | Limited                                | Extensive
Antiblooming          | High to none                           | High
Biasing and clocking  | Multiple, higher voltage               | Single, low voltage

CMOS Development's Winding Path

Initial prediction for CMOS | Twist | Outcome
Equivalence to CCD in imaging performance | Required much greater process adaptation and deeper submicron lithography than initially thought | High performance available in CMOS, but with higher development cost than CCD
On-chip circuit integration | Longer development cycles, increased cost, tradeoffs with noise, flexibility during operation | Greater integration in CMOS, but companion chips still required for both CMOS and CCD
Reduced power consumption | Steady improvement in CCDs | Advantage for CMOS, but margin diminished
Reduced imaging subsystem size | Optics, companion chips and packaging are often the dominant factors in imaging subsystem size | CCDs and CMOS comparable
Economies of scale from using mainstream logic and memory foundries | Extensive process development and optimization required | CMOS imagers use legacy production lines with highly adapted processes akin to CCD fabrication
Charge-coupled device
From Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/EMCCD)
A specially developed CCD used for ultraviolet imaging in a wire-bonded package.
A charge-coupled device (CCD) is a device for the movement of electrical charge, usually from within the device to an area where the charge can be manipulated, for example conversion into a digital value. This is achieved by "shifting" the signals between stages within the device one at a time. CCDs move charge between capacitive bins in the device, with the shift allowing for the transfer of charge between bins. The CCD is a major technology for digital imaging. In a CCD image sensor, pixels are represented by p-doped MOS capacitors. These capacitors are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges. Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data is required. In applications where a somewhat lower quality can be tolerated, such as webcams, cheaper active pixel sensors are generally used.
History
George E. Smith and Willard Boyle, 2009
The charge-coupled device was invented in 1969 at AT&T Bell Labs by Willard Boyle and George E. Smith. The lab was working on semiconductor bubble memory when Boyle and Smith conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices".
A description of how the device could be used as a shift register and as linear and area imaging devices was given in this first entry. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s. The initial paper describing the concept listed possible uses as a memory, a delay line, and an imaging device. The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface, electrically accessed by wire bonds. The first working CCD made with integrated circuit technology was a simple 8-bit shift register. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight-pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers Michael F. Tompsett et al. were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA, and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2-D 100 x 100 pixel device. Steven Sasson, an electrical engineer working for Kodak, invented the first digital still camera using a Fairchild 100 x 100 CCD in 1975. The first KH-11 KENNAN reconnaissance satellite, equipped with a charge-coupled device array (800 x 800 pixels) for imaging, was launched in December 1976. Under the leadership of Kazuo Iwama, Sony also started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders.
Before this happened, Iwama died in August 1982; subsequently, a CCD chip was placed on his tombstone to acknowledge his contribution. In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize in Physics for their work on the CCD.
Basics of operation
The charge packets (electrons, blue) are collected in potential wells (yellow) created by applying positive voltage at the gate electrodes (G). Applying positive voltage to the gate electrodes in the correct sequence transfers the charge packets.
In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking). An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, while a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g.
by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing.
"One-dimensional" CCD image sensor from a fax machine.
Detailed physics of operation
Charge generation
Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled to low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron-hole pairs are generated in the depletion region, they are separated by the electric field: the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified: photo-generation (up to 95% of quantum efficiency), generation in the depletion region, generation at the surface, and generation in the neutral bulk. The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or until thermal equilibrium is reached. In the latter case, the well is said to be full (corresponding typically to about 10^5 electrons per pixel).
Design and manufacturing
The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p-doped (usually with boron) and is grown upon a substrate material, often p++.
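The charge-generation picture above, photo-generated signal plus dark-current electrons accumulating until readout or until the well is full (roughly 10^5 electrons), can be sketched in a few lines. The generation rates used here are hypothetical, chosen only to show the saturation behavior:

```python
# Sketch of charge integration in one MOS-capacitor pixel, following the text:
# photo-generated and dark-current electrons accumulate until readout begins
# or the well saturates. Full well of ~1e5 electrons is the order of magnitude
# quoted in the text; both rates below are hypothetical.
FULL_WELL = 100_000          # electrons
photo_rate = 5_000           # photoelectrons per second (hypothetical scene)
dark_rate = 50               # dark-current electrons per second (hypothetical)

def integrate(seconds):
    """Return collected charge, clipped at the full-well capacity."""
    electrons = (photo_rate + dark_rate) * seconds
    return min(electrons, FULL_WELL)

print(integrate(1))    # 5050 electrons: well below saturation
print(integrate(60))   # 100000: the well is full; extra charge is lost
```

Note that the dark-current term accumulates whether or not light falls on the pixel, which is why it limits the usable integration time as the text says.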
In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion-implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device: this thin layer (≈ 0.2–0.3 µm) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3, compared to the surface-channel CCD. The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate. Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region. Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions. Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron-transfer device, though hole transfer is possible). The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped).
This will cause the CCD to deplete near the p-n junction and will collect and move the charge packets beneath the gates, and within the channels, of the device. CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer, it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method reportedly reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices. Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon-dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets.
Architecture
CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering. In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor, or the image smears as the device is clocked or read out. With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area.
Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much. The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more, depending on pixel size and the overall system's optical design.
CCD from a 2.1 megapixel Argus digital camera.
The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and where issues of money, power, and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device.
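A toy model can make the shutterless-readout smear concrete: while rows shift toward the serial register, light keeps falling on the cells, so a bright scene point contaminates the other charge packets that pass under it in the same column. The array size and the 1%-per-shift exposure figure are illustrative assumptions, not device parameters:

```python
# Toy model of readout smear in a full-frame CCD with no shutter: rows are
# shifted out one at a time, and each remaining charge packet keeps collecting
# light from whatever part of the scene it is currently sitting under.
ROWS, COLS = 6, 5
scene = [[0] * COLS for _ in range(ROWS)]
scene[2][2] = 1000                   # one bright source at row 2, column 2

sensor = [row[:] for row in scene]   # charge at the end of the exposure proper
readout = []
for _ in range(ROWS):
    readout.append(sensor.pop(0))    # bottom row leaves through the serial register
    sensor.append([0] * COLS)        # an empty row enters at the top
    # during the shift, every remaining cell integrates 1% of a full exposure
    # of the scene point now projected onto it (assumed smear fraction)
    for r in range(len(sensor)):
        for c in range(COLS):
            sensor[r][c] += scene[r][c] * 0.01

column = [row[2] for row in readout]
print(column)   # the 1000-electron pixel plus a faint streak in its column
```

Packets that must travel past the bright location each pick up a little extra charge, producing the vertical streak; an interline device avoids this because its packets hop sideways into a masked column in a single shift.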
CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent), making them far more efficient than photographic film, which captures only about 2 percent of the incident light.
CCD from a 2.1 megapixel Hewlett-Packard digital camera.
The most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero-lux (or near-zero-lux) video recording and photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 µm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers. Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels.
Use in astronomy
Due to the high quantum efficiencies of CCDs, the linearity of their outputs (one count for one photon of light), the ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications. Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter both closed and open. The average of images taken with the shutter closed is used to lower the random noise. Once developed, this dark-frame average is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD.
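The dark-frame procedure just described reduces to a few lines of arithmetic. The frames below are tiny 1-D toys with a deliberately hot pixel, and the function names are my own, not from any astronomy package:

```python
# Minimal sketch of dark-frame calibration as described in the text: average
# several closed-shutter exposures into a "master dark", then subtract it
# from the open-shutter (light) frame.
def master_dark(dark_frames):
    """Pixel-wise mean of a list of equally sized dark frames."""
    n = len(dark_frames)
    return [sum(px) / n for px in zip(*dark_frames)]

def calibrate(light, dark):
    """Subtract the master dark, clamping at zero (no negative flux)."""
    return [max(l - d, 0.0) for l, d in zip(light, dark)]

# 1-D toy frames; pixel 2 is a "hot pixel" with high dark current
darks = [[10, 12, 80, 11], [12, 10, 84, 9], [11, 11, 82, 10]]
light = [110, 113, 182, 60]

dark = master_dark(darks)          # [11.0, 11.0, 82.0, 10.0]
print(calibrate(light, dark))      # → [99.0, 102.0, 100.0, 50.0]
```

Averaging several darks rather than using one is what "lowers the random noise" in the text: the systematic hot-pixel offset survives the average and is removed, while shot-to-shot fluctuations are beaten down.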
The Hubble Space Telescope, in particular, has a highly developed series of steps ("data reduction pipeline") to convert the raw CCD data to useful images. CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them.
Array of 30 CCDs used on the Sloan Digital Sky Survey telescope imaging camera, an example of "drift-scanning."
An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read out in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce the largest uniform survey of the sky yet accomplished. In addition to astronomy, CCDs are also used in laboratory analytical instrumentation such as monochromators, spectrometers, and N-slit laser interferometers.
Color cameras
A Bayer filter on a CCD.
x80 microscope view of an RGGB Bayer filter on a 240-line Sony CCD PAL camcorder sensor.
Digital color cameras generally use a Bayer mask over the CCD. Each square of four pixels has one filtered red, one blue, and two green (the human eye is more sensitive to green than to either red or blue). The result of this is that luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
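The RGGB sampling pattern just described can be sketched for a single 2x2 quad. This naive per-quad average is only an illustration (real demosaicing interpolates across neighboring quads), and the function name is my own:

```python
# Sketch of reading one RGGB Bayer quad, per the description above: each
# 2x2 block carries one red, one blue, and two green samples, so green
# (which dominates perceived luminance) is sampled at twice the density.
def bayer_quad_to_rgb(quad):
    """quad is a 2x2 block laid out [[R, G], [G, B]]; return an (R, G, B) triple."""
    (r, g1), (g2, b) = quad
    return (r, (g1 + g2) / 2, b)   # the two green samples are averaged

print(bayer_quad_to_rgb([[200, 120], [130, 40]]))  # → (200, 125.0, 40)
```

Collapsing each quad this way halves the spatial resolution of the red and blue channels relative to the pixel grid, which is exactly the luminance-versus-color-resolution trade-off the text describes.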
Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam-splitter prism, which splits the image into red, green, and blue components. Each of the three CCDs is arranged to respond to a particular color. Most professional video camcorders, and some semi-professional camcorders, use this technique. Another advantage of 3CCD over a Bayer-mask device is higher quantum efficiency (and therefore higher light sensitivity for a given aperture size). This is because in a 3CCD device most of the light entering the aperture is captured by a sensor, while a Bayer mask absorbs a high proportion (about 2/3) of the light falling on each CCD pixel. For still scenes, for instance in microscopy, the resolution of a Bayer-mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color, and the resolution of the three channels becomes equivalent (the resolutions of the red and blue channels are quadrupled while the green channel is doubled).
Sensor sizes
Main article: Image sensor format
Sensors (CCD/CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch-fraction designation such as 1/1.8″ or 2/3″, called the optical format. This measurement originates in the 1950s and the time of Vidicon tubes.
Electron-multiplying CCD
Electrons are transferred serially through the gain stages making up the multiplication register of an EMCCD. The high voltages used in these serial transfers induce the creation of additional charge carriers through impact ionisation.
For a given (fixed) number of input electrons, there is a dispersion (variation) in the number of electrons output by the multiplication register; the probability distribution of the output, simulated for a multiplication register and compared with an empirical fit, illustrates this spread.
An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, L3CCD or Impactron CCD) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain g = (1 + P)^N can be very high, with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic, and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron or not. This removes the noise associated with the stochastic multiplication, at the risk of counting multiple electrons in the same pixel as a single electron.
To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. For multiplication registers with many elements and large gains, the dispersion in the gain is well modelled by the equation:

P(n) = (n - m + 1)^(m-1) / ((m - 1)! (g - 1 + 1/m)^m) * exp(-(n - m + 1) / (g - 1 + 1/m)),  for n >= m,

where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g. Because of their lower cost and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras require a cooling system, using either thermoelectric cooling or liquid nitrogen, to cool the chip down to temperatures in the range of -65 °C to -95 °C. This cooling system unfortunately adds cost to the EMCCD imaging system and may cause condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues. The low-light capabilities of EMCCDs find use primarily in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events, such as lucky imaging of faint stars, high-speed photon-counting photometry, Fabry-Pérot spectroscopy, and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small-animal imaging, single-molecule imaging, Raman spectroscopy, and super-resolution microscopy, as well as a wide variety of modern fluorescence microscopy techniques, thanks to their greater SNR in low-light conditions compared with traditional CCDs and ICCDs.
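A small Monte Carlo sketch reproduces the multiplication-register statistics discussed above. The stage count here (N = 100) is deliberately smaller than the N > 500 of real registers so a pure-Python simulation stays fast, and p = 0.02 is an assumed per-stage gain probability of the same order as the text's figure:

```python
import random

# Monte Carlo sketch of an EMCCD multiplication register: each of N stages
# multiplies the packet by impact ionisation, with a small probability p of
# an extra electron per electron per stage, so the mean gain is (1 + p)**N.
rng = random.Random(42)
N, p = 100, 0.02

def amplify(electrons=1):
    """Pass a charge packet through N stochastic gain stages."""
    for _ in range(N):
        electrons += sum(1 for _ in range(electrons) if rng.random() < p)
    return electrons

gains = [amplify(1) for _ in range(2000)]
mean_gain = sum(gains) / len(gains)
print(f"simulated mean gain: {mean_gain:.1f}")
print(f"theoretical (1+p)^N: {(1 + p) ** N:.1f}")   # ≈ 7.2
```

With a single input electron, the simulated mean lands near the theoretical (1 + p)^N while individual trials scatter widely around it; that trial-to-trial scatter is exactly the stochastic-gain dispersion discussed above.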
In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge-transfer efficiency, and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device.
Frame transfer CCD
Vertical smear.
A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness. The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time has passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as "vertical smear" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. Unfortunately, faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level. A frame transfer CCD solves both problems: it has a shielded (not light-sensitive) area containing as many cells as the area exposed to light.
Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, the cells can be read out at whatever speed is necessary to measure their charge correctly. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures. The disadvantage of such a CCD is the higher cost: the cell area is essentially doubled, and more complex control electronics are needed.

Intensified charge-coupled device

Main article: Image intensifier

An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier mounted in front of the CCD. An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted closely one behind the other in that sequence. Photons coming from the light source fall onto the photocathode, generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage applied between the photocathode and the MCP. The electrons are multiplied inside the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back into photons, which are guided to the CCD by a fiber optic or a lens. An image intensifier inherently includes shutter functionality: if the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus no electrons are multiplied and emitted by the MCP, no electrons reach the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed.
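The intensifier chain just described (photocathode, then MCP, then phosphor) can be sketched as a simple product of per-stage conversion factors. All of the numbers below are illustrative assumptions, not specifications of any real intensifier:

```python
def intensifier_output_photons(n_photons, qe, mcp_gain, phosphor_conv):
    """Mean photons delivered to the CCD: incident photons become
    photoelectrons at the photocathode (quantum efficiency qe), are
    multiplied in the MCP (mcp_gain), and are converted back into
    photons at the phosphor screen (phosphor_conv photons/electron)."""
    return n_photons * qe * mcp_gain * phosphor_conv

# Assumed example: 100 photons, 40% photocathode QE, MCP gain of 1000,
# 20 phosphor photons per electron.
out = intensifier_output_photons(100, 0.40, 1000.0, 20.0)
```

Even this toy chain shows why a handful of incident photons can produce a signal far above the CCD's read noise, which is what makes single-photon detection possible.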
The process of reversing the control voltage at the photocathode is called gating, and ICCDs are therefore also called gateable CCD cameras. Besides the extremely high sensitivity of ICCD cameras, which enables single-photon detection, gateability is one of the major advantages of the ICCD over EMCCD cameras. The highest-performing ICCD cameras enable shutter times as short as 200 picoseconds. ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around 170 K. This cooling system adds cost to the EMCCD camera and can cause condensation problems in the application. ICCDs are used in night-vision devices and in a large variety of scientific applications.

Blooming

When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking. Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light-collection area, and so did not reduce light sensitivity.

See also

Photodiode
CMOS sensor
Rotating line camera
Superconducting camera
Wide dynamic range
Hole Accumulation Diode (HAD)
Andor Technology – manufacturer of EMCCD cameras
PI/Acton – manufacturer of EMCCD cameras
Stanford Computer Optics – manufacturer of ICCD cameras
Time delay and integration (TDI)

References

^ See US3792322 and US3796927
^ James R. Janesick (2001). Scientific Charge-Coupled Devices. SPIE Press. p. 4. ISBN 978-0-8194-3698-6.
^ W. S. Boyle and G. E. Smith (April 1970). "Charge Coupled Semiconductor Devices".
Bell Sys. Tech. J. 49 (4): 587–593.
^ G. F. Amelio, M. F. Tompsett, and G. E. Smith (April 1970). "Experimental Verification of the Charge Coupled Device Concept". Bell Sys. Tech. J. 49 (4): 593–600.
^ M. F. Tompsett, G. F. Amelio, and G. E. Smith (1 August 1970). "Charge Coupled 8-bit Shift Register". Applied Physics Letters 17: 111–115. Bibcode: 1970ApPhL..17..111T. doi:10.1063/1.1653327.
^ Tompsett, M. F.; Amelio, G. F.; Bertram, W. J., Jr.; Buckley, R. R.; McNamara, W. J.; Mikkelsen, J. C., Jr.; Sealer, D. A. (November 1971). "Charge-coupled imaging devices: Experimental results". IEEE Transactions on Electron Devices 18 (11): 992–996. doi:10.1109/T-ED.1971.17321. ISSN 0018-9383.
^ Dobbin, Ben (2005-09-08). Kodak engineer had revolutionary idea: the first digital camera. seattlepi.com. Retrieved 2011-11-15.
^ globalsecurity.org – KH-11 KENNAN, 2007-04-24.
^ "NRO review and redaction guide (2006 ed.)". National Reconnaissance Office.
^ Johnstone, B. (1999). We Were Burning: Japanese Entrepreneurs and the Forging of the Electronic Age. New York: Basic Books. ISBN 0-465-09117-2.
^ "Charles Stark Draper Award"
^ "Nobel Prize website"
^ For instance, the specsheet of PI/Acton's SPEC-10 camera specifies a dark current of 0.3 electron per pixel per hour at −110 °C.
^ a b c Sze, S. M.; Ng, Kwok K. (2007). Physics of Semiconductor Devices (3rd ed.). John Wiley and Sons. ISBN 978-0-471-14323-9. Chapter 13.6.
^ Hainaut, Olivier R. (December 2006). "Basic CCD image processing". Retrieved January 15, 2011. Hainaut, Olivier R. (June 1, 2005). "Signal, Noise and Detection". Retrieved October 7, 2009. Hainaut, Olivier R. (May 20, 2009). "Retouching of astronomical data for the production of outreach images". Retrieved October 7, 2009. (Hainaut is an astronomer at the European Southern Observatory.)
^ F. J. Duarte, Tunable Laser Optics (Elsevier Academic, New York, 2003), Chapter 10.
^ As specified in Nüvü Caméras' EM N2 camera specsheet.
^ Daigle, Olivier; Djazovski, Oleg; Laurin, Denis; Doyon, René; Artigau, Étienne (July 2012). Characterization Results of EMCCDs for Extreme Low Light Imaging.
^ Phil Plait. "The Planet X Saga: SOHO Images"
^ Phil Plait. "Why, King Triton, how nice to see you!"
^ Thomas J. Fellers and Michael W. Davidson. "CCD Saturation and Blooming"
^ Albert J. P. Theuwissen (1995). Solid-State Imaging with Charge-Coupled Devices. Springer. pp. 177–180. ISBN 9780792334569.

External links

Wikimedia Commons has media related to: Charge-coupled devices
Journal article on basics of CCDs
Nikon microscopy introduction to CCDs
Concepts in Digital Imaging Technology
CCDs for Material Scientists
Micrograph of the photosensor array of a webcam
A general L3CCD page with many links
Paper discussing the performance of L3CCDs
Statistical properties of multiplication registers, including derivation of the equation above
More statistical properties
L3CCDs used in astronomy

CCD (Baidu Baike entry, http://baike.baidu.com/view/18579.htm)

CCD is short for charge-coupled device (Chinese: 电荷耦合元件), and the component is often called a CCD image sensor. A CCD is a semiconductor device that converts an optical image into a digital signal. The tiny photosensitive elements on a CCD are called pixels; the more pixels a CCD contains, the higher the resolution of the picture it delivers. A CCD plays the same role as film, but it converts the image's pixels into digital signals. The CCD carries many regularly arranged capacitors that sense light and turn the image into an electrical signal; under the control of external circuitry, each small capacitor can transfer its charge to an adjacent capacitor.

Overview

CCDs are widely used in digital photography and astronomy, particularly in optical remote sensing, optical and spectroscopic telescopes, and high-speed imaging techniques such as lucky imaging. CCDs appear in video cameras, digital cameras and scanners. Video cameras use area-array CCDs, covering both the x and y directions to capture a two-dimensional image, whereas scanners use linear CCDs with only an x direction; the scan in the y direction is performed by the scanner's mechanics.

History

The CCD was invented in 1969 at Bell Labs in the United States by Willard S. Boyle and George E. Smith. [Figure: comparison table of common CCD sizes.] Bell Labs was then developing the picture phone and semiconductor bubble memory. Combining the two new technologies, Boyle and Smith arrived at a device they named the "charge 'bubble' device" (Charge "Bubble" Devices). Its defining property was that it could pass charge along the surface of a semiconductor, so it was first tried as a memory device, into which charge could only be "injected" from a register. It was soon found, however, that the photoelectric effect could generate charge on the surface of the element and thus compose a digital image. By the 1970s, Bell researchers could capture images with simple linear devices, and the CCD was born. Several companies took up the invention and pursued it further, including Fairchild Semiconductor, RCA and Texas Instruments. Fairchild's products reached the market first: in 1974 it announced a 500-element linear device and a 100×100-pixel area device.

The above outlines the CCD's general development. Notable later sensor technologies (largely from Sony) include:

1. The HAD sensor. The HAD (hole-accumulation diode) sensor adds a hole-accumulation layer on the surface of the N-substrate / P-layer / N+ diode structure, a construction unique to Sony. This layer solves the dark-current problem that normally afflicts the sensor surface. A vertical path through the N substrate for excess electrons also raises the aperture ratio, and with it the sensitivity. In the early 1980s Sony applied this first in products with variable-speed electronic shutters, giving sharp images even of fast-moving subjects.

2. On-chip microlenses. In the late 1980s, shrinking pixels meant a smaller light-collecting area and falling sensitivity. To counter this, Sony placed a tiny lens in front of each photodiode; the light-gathering area is then set by the surface of the microlens rather than by the sensor's aperture, raising the effective aperture ratio and greatly improving sensitivity.

3. SUPER HAD CCD. By the late 1990s pixel areas had shrunk further, and the microlens technique of 1989 could no longer raise sensitivity; increasing the gain of the on-chip amplifier would also amplify noise and visibly degrade the picture. Sony therefore refined the microlens approach by optimizing the lens shape to improve light utilization: the SUPER HAD CCD, a design that raises sensitivity through light-collection efficiency and laid a foundation for subsequent CCD technology.

4. New-structure CCD. As camera lenses gained larger apertures, more and more oblique light entered the camera, and not all of it could be focused onto the sensor, lowering its sensitivity. In 1998 Sony addressed this by adding an internal lens layer between the color filter and the light shield, improving the internal light path so that oblique rays are also focused onto the photosensor; at the same time the insulating film between the silicon substrate and the electrodes was thinned, keeping out signals that would cause vertical-CCD noise and improving smear performance.

5. EXVIEW HAD CCD. Infrared light, of longer wavelength than visible light, also undergoes photoelectric conversion in silicon, but until then CCDs had no effective way to collect the resulting charge in the sensor. Sony's EXVIEW HAD CCD, developed in 1998, converts previously wasted near-infrared light into usable image data, extending the response beyond the visible range and greatly raising sensitivity: with an EXVIEW HAD CCD, bright pictures can be obtained even in dark environments. Charge generated deep in the silicon, which used to leak into the vertical CCD as smear, is also collected by the sensor, so image-degrading noise is sharply reduced.

Inventors' honours. In January 2006, Boyle and Smith received the Charles Stark Draper Prize of the IEEE in recognition of their contributions to the development of the CCD. On 6 October 2009, the Royal Swedish Academy of Sciences announced the Nobel Prize in Physics, awarded jointly to the Hong Kong scientist Charles K. Kao and to the scientists Willard S. Boyle and George E. Smith: Kao "for groundbreaking achievements concerning the transmission of light" in optical communication, and Boyle and Smith "for the invention of an imaging semiconductor circuit, the CCD image sensor".

Features

A CCD image sensor converts an optical signal directly into an analog current signal; after amplification and analog-to-digital conversion, the image can be acquired, stored, transmitted, processed and reproduced. Its notable characteristics are: (1) small size and low weight; (2) low power consumption, low operating voltage, resistance to shock and vibration, stable performance and long life; (3) high sensitivity, low noise and wide dynamic range; (4) fast response, self-scanning capability, low image distortion and no image lag; (5) fabrication with VLSI process technology, giving high pixel integration, precise dimensions and low cost in volume production. Many instruments that measure outer diameters optically therefore use CCD devices as their photodetectors.

Functionally, CCDs divide into linear (line-array) and area-array types. In a linear CCD the internal electrodes are grouped into sets, each set called a phase and driven by the same clock pulse; the number of phases is set by the chip's internal structure, and differently structured CCDs suit different applications. Linear CCDs come in single-channel and dual-channel forms, with MOS-capacitor or photodiode photosites, and are relatively simple to fabricate. They consist of a photosensitive array plus a shift-register scanning circuit; they process information quickly, need simple peripheral circuitry and suit real-time control, but they capture little information and cannot handle complex images. Area-array CCDs are far more complex: many photosites arranged in a matrix and interconnected into one device, capturing much more information and able to handle complex images.

Key performance parameters

1. Spectral sensitivity. This depends on quantum efficiency, wavelength, integration time and related parameters. Quantum efficiency characterizes the chip's ability to convert light of different wavelengths into charge, and it differs between fabrication processes. Sensitivity also depends on the illumination geometry: back-illuminated CCDs have high quantum efficiency and a smooth spectral response curve, while front-illuminated CCDs show several peaks and valleys in their response curve owing to reflection and absorption losses.

2. Dark current and noise. Dark current is produced by thermally excited carriers inside the device. At low frame rates a CCD may integrate (expose) for seconds to thousands of seconds to capture faint images; with long exposures, dark current can fill the potential wells with thermal electrons before the photoelectrons do. Because of crystal-lattice defects, dark current can differ greatly between pixels, producing a star-field-like fixed-pattern noise in long exposures. Since this effect comes from a minority of pixels with abnormally high dark current, it can generally be subtracted from the image after recording, unless the dark current has already saturated the wells. Lattice defects also produce dead pixels that collect no photoelectrons; because charge must pass through pixels on its way off the chip, a single dead pixel can invalidate all or part of an entire column. Over-exposure makes excess photoelectrons spill into neighbouring pixels, causing a spreading blur in the image.

3. Transfer efficiency and transfer loss. Moving a charge packet from one potential well to the next takes time, and a pixel's charge moves between wells a thousand times or more before leaving the chip, so the charge-transfer efficiency must be extremely high; otherwise the effective number of photoelectrons is severely depleted during readout. The main cause of incomplete transfer is the trapping of electrons by surface states, and the transfer loss degrades the signal. The "fat zero" technique can reduce this loss.

4. Clock-frequency limits. The lower limit is set by the mean lifetime of non-equilibrium carriers; the upper limit by the transfer-loss rate of the charge packets, i.e. each transfer must be given enough time.

5. Dynamic range. The ratio of the strongest unsaturated signal to the weakest signal in the same image; in digital images it is usually expressed in DN.

6. Non-uniformity. The inconsistency in the response of all the chip's pixels to a signal of identical wavelength and intensity.

7. Non-linearity. The inconsistency, for a given wavelength, of the ratio of output signal strength to input signal strength as the input level changes.

8. Time constant. Characterizes the detector's response speed and its ability to follow modulated radiation; it is related to the free-carrier lifetime in photoconductive and photovoltaic detectors.

9. Pixel defects. (a) Pixel defect: under illumination at 50% of the linear range, a pixel whose response deviates from its neighbours by more than 30%. (b) Cluster defect: more than 5 defective pixels within a 3×3 area. (c) Column defect: more than 8 defective pixels within a 1×12 column span. (d) Row defect: more than 8 defective pixels within a horizontal group.

Applications

Over forty years, CCD devices and their applications have advanced remarkably, especially in image sensing and non-contact measurement, and the breadth and depth of CCD applications will only grow as the technology and theory develop. A CCD integrates a highly photosensitive semiconductor material that generates charge signals in proportion to the light falling on its surface; an analog-to-digital converter chip turns these into digital "0"/"1" signals, which after compression and formatting can be saved to flash memory or a hard-disk card. The received light is thus converted into electronic image signals a computer can recognize, allowing accurate measurement and analysis of the imaged object. CCDs with grid-arranged pixels serve as the image sensors of digital cameras, optical scanners and video cameras. Their light efficiency can reach 70% (capturing 70% of the incident light), far better than the roughly 2% of traditional film, which is why astronomers adopted CCDs so quickly.

The linear CCD of a fax machine forms its image through a lens onto the capacitor array, each capacitor unit acquiring charge in proportion to the local brightness. The linear CCD of a fax machine or scanner captures one narrow strip of the image at a time, while the area CCD of a digital camera or camcorder captures a whole frame at once, or a rectangular region of it. Once the exposure is complete, the control circuitry makes each capacitor unit pass its charge to the next; when the charge reaches the last unit at the edge, the signal enters an amplifier and is converted into a voltage. The cycle repeats until the entire image has been converted into voltages, which are sampled, digitized and stored in memory. The stored image can then be sent to a printer, a storage device or a display.

In digital cameras the CCD's uses are especially varied. A typical color digital camera mounts a Bayer filter over the CCD: each group of four pixels has one pixel filtered red, one blue and two green (the human eye being more sensitive to green). Every pixel thus receives a light signal, but the color resolution is lower than the luminance resolution.

A 3CCD system, built from three CCDs and a dichroic beam-splitter prism, separates the colors better: the prism splits the incoming light into red, blue and green components, and each of the three CCDs images one of them. All professional digital video cameras, and some semi-professional ones, use 3CCD technology. Ultra-high-resolution CCD chips remain quite expensive, and a high-resolution still camera equipped with 3CCD often exceeds the budget of many professional photographers, so some high-end cameras use a rotating color-filter wheel instead, combining high resolution with faithful color; such multi-shot cameras can only photograph static subjects.

Cooled CCDs were also widely used from the early 1990s in astrophotography and various night-vision equipment, and the major observatories continue to develop high-pixel-count CCDs for extremely high-resolution images of celestial objects. One elegant astronomical use of the CCD lets a fixed telescope behave like a tracking telescope: the charge on the CCD is read out and shifted in the same direction as the object's motion across the sky, and at the same rate (drift scanning). Guiding with the CCD in this way not only lets the telescope correct tracking errors effectively but also lets it record a larger field than it otherwise could.

Most CCDs are sensitive to infrared light, which gives rise to infrared imaging, night-vision devices and (near-)zero-lux cameras. To reduce infrared interference, astronomical CCDs are often cooled with liquid nitrogen or thermoelectrically, since room-temperature objects emit black-body radiation in the infrared. The infrared sensitivity has another side effect: digital cameras or camcorders without an infrared-cut filter readily photograph the infrared emitted by remote controls. Lowering the temperature reduces the dark current of the capacitor array and improves the CCD's low-light sensitivity; even the sensitivity to ultraviolet and visible light improves (the signal-to-noise ratio rises).

Thermal noise, dark current and cosmic radiation all affect the pixels on the CCD's surface. Astronomers make multiple exposures, opening and closing the shutter, and average them to mitigate these effects. To remove background noise, the average image signal with the shutter closed is recorded first, the "dark frame". The shutter is then opened, the image taken, and the dark-frame values subtracted; filtering out systematic noise (dark and bright spots and so on) then yields cleaner detail.

Cooled CCD cameras for astrophotography must be rigidly mounted at the focal position to exclude stray light and vibration. Because most imaging platforms are inherently heavy, astronomers use "autoguiding" to photograph faint objects such as galaxies and nebulae. Most autoguiding systems use an additional, off-axis CCD to monitor any drift of the image, though some couple the guider to the imaging CCD camera on the main telescope: optics direct part of the starlight inside the main telescope onto a second, guiding CCD in the camera, which quickly detects small errors in tracking the object and automatically adjusts the drive motors to correct them, with no separate guiding arrangement needed. [Figure: a set of CCDs used for ultraviolet image processing.]

Why CCDs sense infrared

A CCD is inherently responsive to infrared light. With a monochrome camera, for example, switch off the bright room lights, turn on an infrared illuminator, and an image appears at once; a monochrome camera has no color to disturb. Yet most color CCD cameras in practice cannot see infrared. The color CCD itself can in fact recognize and sense infrared, but infrared would disturb the computations of the DSP (the main image-processing chip) and cause color casts; so, to keep a color CCD from "shifting color", the filter glued on top of it is made to block infrared. That filter transmits about 93% from 380 nm to 645 nm, exactly the visible range (violet, indigo, blue, green, yellow, orange, red: the colors of the rainbow). Light of 600-odd nm is red; beyond it lies the "infrared", light beyond red that the eye can no longer see. Likewise, around 380 nm the eye sees violet, and beyond 380 nm on the other side lies the "ultraviolet".
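The readout described in this entry is a bucket brigade: a charge packet is shifted thousands of times before it reaches the output amplifier, which is why the charge-transfer efficiency (CTE) listed among the performance parameters must be extremely close to 1. A minimal numeric sketch (illustrative values, not from any datasheet):

```python
def retained_fraction(cte, n_transfers):
    """Fraction of a charge packet surviving n_transfers shifts, each
    with charge-transfer efficiency cte (fraction kept per shift)."""
    return cte ** n_transfers

# A CTE of 99.999% still keeps ~98% of the charge after 2000 transfers,
# whereas a CTE of 99.9% would keep only ~13.5% over the same path.
good = retained_fraction(0.99999, 2000)
poor = retained_fraction(0.999, 2000)
```

The exponential compounding is the point: a loss that looks negligible per transfer becomes ruinous over a full column's journey off the chip.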
CCD color digital cameras

A typical color digital camera mounts a Bayer filter over the CCD. Each group of four pixels has one pixel filtered red, one blue and two green (the human eye being more sensitive to green); every pixel receives a light signal, but the color resolution is lower than the luminance resolution.

A 3CCD system built from three CCDs and a dichroic prism separates colors better: the prism splits the incoming light into red, blue and green components, and each of the three CCDs images one of them. All professional digital video cameras, and some semi-professional ones, use 3CCD technology. As of 2005, ultra-high-resolution CCD chips remained quite expensive, and a high-resolution 3CCD still camera often exceeds the budget of many professional photographers; some high-end cameras therefore use a rotating color-filter wheel, combining high resolution with faithful color reproduction. Such multi-shot cameras can only photograph static subjects.

The CCD is made from a highly photosensitive semiconductor material that turns light into charge, which an analog-to-digital converter chip turns into a digital signal; after compression, the data are saved by the camera's internal flash memory or built-in hard-disk card, so they can easily be transferred to a computer and, with the computer's processing tools, edited as needed or imagined. A CCD consists of many photosites, usually counted by the million (megapixels). When light strikes the CCD's surface, each photosite registers its charge on the device, and the signals from all the photosites together form a complete picture.

The CCD is a critically important component of a video camera: like the human eye, it converts light into electrical signals, so its quality directly affects the camera's performance. Many figures of merit describe a CCD, among them pixel count, CCD size, sensitivity and signal-to-noise ratio, of which pixel count and CCD size are the most important. Pixel count is the number of photosensitive elements on the CCD. The picture a camera shoots can be understood as composed of many small dots, each dot one pixel. Clearly, more pixels give a sharper picture, and a CCD with too few pixels visibly hurts sharpness, so in theory the more pixels the better. But raising the pixel count lowers manufacturing yield and raises cost, and under current television standards the gain in sharpness becomes marginal beyond a certain count, so around one million pixels is generally sufficient for ordinary use.

A single-CCD camera has only one CCD, used for the photoelectric conversion of both the luminance and the chrominance signals, the chrominance being derived from specific color masks on the CCD combined with the circuitry behind it. Because one CCD must handle luminance and chrominance at once, it cannot do both perfectly, and the color reproduction of the resulting image falls short of professional requirements. To solve this, the 3CCD camera appeared. 3CCD, as the name says, means one camera uses three CCDs. Light passed through a special prism is split into red, green and blue, the three primaries used in television, from which all television signals, including luminance, can be produced. If each color is received by its own CCD, converted to an electrical signal, and processed by the circuitry into an image signal, that constitutes a 3CCD system. Compared with a single CCD, a 3CCD camera, converting red, green and blue on three separate CCDs, reproduces color more naturally and delivers better brightness and sharpness, but with three chips it is much more expensive than a single-CCD camera.

Four-color CCD is a new CCD technology Sony introduced in 2003: red, green, blue and emerald (RGBE) instead of the traditional three colors, further reducing the color-reproduction error rate and making colors more lifelike. The first digital camera with a four-color CCD was the Sony DSC-F828. [Figure: an area-array CCD.]

A digital camera's specification sheet often lists the CCD as, for example, a "1/2.7-inch CCD". The "1/2.7 inch" is the CCD's size, in practice the length of its diagonal. Current digital cameras generally use 1/2.7-inch, 1/2.5-inch or 1/1.8-inch CCDs. The CCD is a collection of light-receiving elements (pixels) that receives the light passing through the lens and converts it into electrical signals. For the same pixel count, a larger CCD means larger individual pixels; larger pixels collect more light, so in theory a larger CCD favours image quality. But a digital camera's image quality is not decided by the CCD alone: the lens, and the performance of the circuitry that forms an image from the CCD's electrical output, also affect it, so "large CCD = high image quality" is not correct. For example, although 1/2.7 inch is smaller than 1/1.8 inch, digital cameras with 1/2.7-inch CCDs have not been criticized for poor image quality. Compact digital cameras keep getting smaller and lighter, and for design reasons most of them use the small 1/2.7-inch CCD. Incidentally, the "inch" in these type designations is sometimes written "inch" but is not the ordinary 1 inch = 25.4 mm: the convention combines the sizing of the camera tubes used before the CCD's debut with the display practice of the time, so the dimensions are historically peculiar. A 1/2.7-inch type has a 6.6 mm diagonal, and a 1/1.8-inch type about 9 mm.

CCD digital video cameras: selection and classification
CCD structure and working principle (material from 中国仪器超市, China Instrument Supermarket):

The CCD's structure comprises photodiodes, a parallel charge-storage (shift) register, a parallel transfer register, a signal amplifier and an analog-to-digital converter, described in turn:
1. Photodiode: the photosensitive element.
2. Shift register: temporarily stores the charge generated by exposure.
3. Transfer register: temporarily stores the analog signal from the shift register and transfers and amplifies the charge.
4. Signal amplifier: amplifies the weak electrical signal.
5. A/D converter: converts the amplified electrical signal into a digital signal.

In operation, the CCD works through three layers: microlenses, color filters and the photosensitive layer, described in turn:

1. Microlens. The microlens is the CCD's first layer. The key to a digital camera's imaging lies in its photosensitive layer, and to raise the CCD's light-collection ratio the light-receiving area of each individual pixel must be enlarged; but methods that raise the collection ratio also tend to degrade image quality. The microlens layer is in effect a pair of glasses placed in front of the photosensitive layer: the light-gathering area is then determined by the surface area of the microlens rather than by the sensor's aperture.

2. Color filter. The filter is the CCD's second layer, and there are two color-separation schemes at present: RGB primary-color filtering and CMYK complementary-color filtering, each with advantages and drawbacks. First the concepts. RGB is three-primary separation: almost every color the human eye can identify can be composed from red, green and blue, the three letters standing for those channels, so RGB filtering builds color by adjusting those three channels. CMYK builds color from four channels: cyan (C), magenta (M), yellow (Y) and black (K). CMYK is better suited to the printing industry, but it cannot produce as many colors as RGB. Primary-color CCDs give sharp images and true colors, but their weakness is noise; note, accordingly, that digital cameras with primary-color CCDs rarely go beyond ISO 400. Complementary-color CCDs, with the extra yellow filter, distinguish colors more finely but sacrifice some image resolution; in exchange they tolerate higher sensitivities, generally settable to ISO 800 or above.

3. Photosensitive layer. The third layer converts the light passing through the filter layer into electronic signals and sends them to the image-processing chip, which reconstructs the image.

The CCD chip is the heart of a camera, like the retina of the human eye. At the time of writing, China could not yet manufacture them; most cameras on the market used chips made by the Japanese firms Sony, Sharp, Panasonic and Fuji, and Samsung of Korea could by then also produce them, though at somewhat lower quality. Because chips come off the line in different grades and makers obtain them through different channels, CCD imaging quality varies widely. A simple purchase test needing no special instruments: connect the power, run the video cable to a monitor, close the lens iris, and check whether the all-black image shows bright spots and how heavy the snow on the screen is; this is the simplest direct test of a CCD chip. Then open the iris and view a static subject (for a color camera, preferably a brightly colored object) and check whether the image on the monitor shows color casts or distortion and whether the color or grey levels are smooth. A good CCD reproduces a scene's colors well, making objects look clear and natural; a defective one shows color casts, displaying blue or red even when pointed at a sheet of white paper. Dust in the production cleanroom can leave debris on some CCD targets; normally this does not affect the image, but in low-light or photomicrography work even fine dust has bad consequences, so chips for such work must be chosen carefully.

Classification:

1. By imaging color. Color cameras: suited to distinguishing scene detail, such as the colors of clothing or scenery. Monochrome cameras: suited to poorly lit areas and places where lighting cannot be installed for night use, or where only the position or movement of the scene needs monitoring. For demanding scientific imaging a monochrome camera is also the usual choice, because its images are often closer to the real object than color photographs (color images have passed through filters, whereas monochrome images are formed from unprocessed light).

2. By resolution and sensitivity. General-purpose models below 380,000 pixels, most commonly 250,000 pixels (512×492) with 400-line resolution; and high-resolution models above 380,000 pixels.

3. By target (sensor) size. CCD chips have been developed in several sizes; most now in use are 1/3" and 1/4". When buying a camera, especially when the imaging angle matters, the target size and how the CCD matches the lens directly affect the field of view and image sharpness.
1 inch: target 12.7 mm × 9.6 mm, diagonal 16 mm.
2/3 inch: target 8.8 mm × 6.6 mm, diagonal 11 mm.
1/2 inch: target 6.4 mm × 4.8 mm, diagonal 8 mm.
1/3 inch: target 4.8 mm × 3.6 mm, diagonal 6 mm.
1/4 inch: target 3.2 mm × 2.4 mm, diagonal 4 mm.

4. By scan format: PAL or NTSC. China uses the interlaced PAL format (CCIR for monochrome), with 625 lines and 50 fields; non-standard formats appear only in medical and other specialist fields. Japan uses the NTSC format, 525 lines and 60 fields (EIA for monochrome).

5. By supply voltage: 110 V AC (most NTSC cameras), 220 V AC, 24 V AC, 12 V DC, or 9 V DC (most miniature cameras).

6. By synchronization method:
Internal sync: operation timed by a sync signal generated by the camera's internal sync circuit.
External sync: an external sync-signal generator feeds the sync signal into the camera's external sync input.
Power-line sync (line lock): vertical sync driven from the camera's AC supply.
External VD sync: a VD sync pulse fed in on the camera's signal cable.
Multi-camera external sync: several cameras locked to a fixed external sync so that all work under the same timing; because the cameras are synchronized, switching one camera to another scene does not distort the synchronized cameras' pictures.

7. By minimum illumination, CCD cameras further divide into:
Normal: needs about 1–3 lux to work properly.
Moonlight grade: about 0.1 lux.
Starlight grade: below 0.01 lux.
Infrared: uses infrared illumination and can image with no visible light at all.

Main specifications:

CCD size, i.e. the camera's target size. Formerly mostly 1/2 inch; 1/3 inch is now universal, and 1/4 inch and 1/5 inch have been commercialized.

CCD pixel count, the chief performance figure: it determines the clarity of the displayed image, and the higher the resolution, the better the detail. The CCD is an array of photosensitive elements, each one a pixel; the more pixels, the clearer the image. The market currently divides at roughly 250,000 and 380,000 pixels, cameras above 380,000 pixels counting as high-definition.

Horizontal resolution. A color camera's typical resolution is between 320 and 500 TV lines, in grades around 330, 380, 420, 460 and 500 lines. Resolution is expressed in TV lines; it depends on the CCD and the lens, and also directly on the bandwidth of the camera's signal chain. The usual rule is that 1 MHz of bandwidth corresponds to about 80 lines of resolution: the wider the band, the clearer the image and the higher the line count.

Minimum illumination, also called sensitivity: how responsive the CCD is to ambient light, or the darkest light at which it still images normally. Illuminance is measured in lux; the smaller the figure, the less light is needed and the more sensitive the camera. Moonlight- and starlight-grade high-sensitivity cameras work in very dark conditions; 2–3 lux is ordinary, and ordinary cameras below 1 lux have now appeared.

Scan format: PAL or NTSC.

Camera power: 220 V, 110 V or 24 V AC; 12 V or 9 V DC.

Signal-to-noise ratio: typically 46 dB. At 50 dB the image has slight noise but good quality; at 60 dB the image quality is excellent, with no visible noise.

Video output: generally 1 Vp-p into 75 Ω, on a BNC connector.

Lens mount: C or CS; the two differ in the flange-to-sensor distance.

Adjustable functions:

Choice of synchronization. (A) For a single camera, the main sync modes are three: internal sync, using the sync signal produced by the camera's internal crystal-oscillator circuit; external sync, feeding the signal from an external sync generator into the camera's external sync input; and power-line sync (also called line lock), which uses the camera's AC supply to drive vertical sync, locking the camera to the mains zero-crossing. (B) In a multi-camera system, all video inputs should be vertically synchronized so that switching between camera outputs does not distort the picture. But the cameras in such a system may be powered from different phases of a three-phase supply, or the whole system may be unsynchronized with the mains. Possible measures: feed the sync signal from one external sync generator into every camera's external sync input; or adjust each camera's "phase adjustment" potentiometer. At the factory a camera's vertical sync is in phase with the rising positive zero-crossing of the AC supply, so a phase-delay circuit can give each camera a different shift, adjustable over 0–360 degrees, to obtain proper vertical sync.

Automatic gain control. Every camera has a video amplifier that raises the CCD's signal to a usable level; its amplification, the gain, is equivalent to higher sensitivity and makes the camera responsive in dim light, but in brightly lit surroundings the amplifier overloads and distorts the video signal. The camera's automatic gain control (AGC) circuit therefore senses the video signal level and switches the AGC in and out as appropriate, letting the camera work over a wide range of illumination (its dynamic range): in low light the camera's sensitivity is raised automatically, strengthening the image signal to obtain a clear picture.

Backlight compensation. Normally the camera's AGC operating point is determined by averaging the content of the entire field of view, but if the scene contains a very bright background region and a very dark foreground subject, the operating point so determined may not suit the foreground subject; backlight compensation may improve how the foreground subject is displayed. With backlight compensation switched on, the camera averages over only a sub-region of the field to set its AGC operating point; if the foreground subject lies within that sub-region, its visibility can be expected to improve.

Electronic shutter
In a CCD camera, the shutter is implemented by electronically controlling the charge-accumulation time of the image surface. The electronic shutter controls the CCD's integration time: with the electronic shutter off, the accumulation time is 1/60 s for an NTSC camera and 1/50 s for a PAL camera. With the electronic shutter enabled, an NTSC camera's shutter covers the range from 1/60 s to 1/10000 s in 261 steps, and a PAL camera's covers 1/50 s to 1/10000 s in 311 steps. As the shutter speed increases, less light is focused onto the CCD in the time allowed for each video field, reducing the camera's sensitivity; on the other hand, higher shutter speeds produce a "stop-motion" effect when observing moving subjects, greatly increasing the camera's dynamic resolution.

White balance. White balance applies only to color cameras; its purpose is to make the camera's image reflect the scene accurately. There are manual and automatic forms.

(A) Automatic white balance. Continuous mode: the white-balance setting adjusts continuously as the scene's color temperature changes, over a range of 2800–6000 K. This suits situations where the scene's color temperature keeps changing during shooting and makes colors look natural, but when the scene contains little or no white, continuous white balance cannot produce the best color. Push-button mode: first aim the camera at a white target such as a white wall or white paper, then move the mode switch from manual to the set position and hold it there for a few seconds or until the image reads white; once the white balance has been taken, return the switch to manual to lock the setting. The white-balance setting is then held in the camera's memory until it is set again, over a range of 2300–10000 K, and it survives even a loss of camera power. Push-button white balance is the most accurate and reliable and suits most applications.

(B) Manual white balance. Switching manual white balance on disables automatic white balance; the image's red or blue balance can then be adjusted over as many as 107 levels, for example one level more or less red, or one level more or less blue. In addition, some cameras offer commands that fix the white balance at steps such as 3200 K (incandescent level) and 5500 K (daylight level).

Color adjustment. Most applications need no color adjustment of the camera; when adjustment is needed, work carefully to avoid disturbing other colors. The adjustments available are: red–yellow color increase, moving red one step towards magenta; red–yellow color decrease, moving red one step towards yellow; blue–yellow color increase, moving blue one step towards cyan; blue–yellow color decrease, moving blue one step towards magenta.

CCD digital camera parameters:

● Pixels: the familiar figure. For a given chip, the higher the pixel count, the lower the sensitivity; the two trade off inversely, so more pixels are not automatically better. Once the pixel count is adequate, sensitivity should take priority.
● Dynamic range: in fact determined by two other parameters. Dynamic range = 20 × log10(full-well electrons / total noise). The higher this figure, the more sensitive the CCD.
● Full-well capacity: as the dynamic-range formula shows, the more full-well electrons the better.
● Noise: simply put, unwanted signal, comprising readout noise and dark noise. Readout noise is the extra noise of the camera electronics while processing the image and is related to the electronics' efficiency.
● Cooling: the CCD warms up as it operates, which generates noise, especially during long exposures (fluorescence photography and similar work needs fairly long exposures). Lowering the temperature reduces this kind of noise, hence cooled CCDs. There are many cooling methods: fans, thermoelectric (Peltier) cooling, circulating water, even liquid nitrogen. The deeper the cooling, the better the noise reduction, but the higher the cost.
● Grey levels: usually quoted as a bit depth; a higher value helps with images that have many levels or levels that are hard to distinguish. A common example is photographing blood smears in a hospital haematology department: red cells are very thin, numerous, and often seen overlapping under the microscope. The eye separates the overlapping parts fairly well, but for a CCD at least 12 bits are basically needed, and 14 bits is best. For grey-level analysis or quantitative fluorescence analysis, more grey levels are likewise better.
● Chip size: because of the inverse relation between pixels and sensitivity, a larger chip is naturally better.
● Speed: naturally the faster the better, but distinguish readout speed, preview speed and capture speed. High readout speed does not guarantee fast preview or capture, which are also affected by the downstream interface, the computer and so on; preview speed depends on resolution, while capture speed is relatively better behaved, varying mainly with the computer's specification.
● Interface: USB is the most common, then FireWire (1394), then serial.
● Binning: a common way to raise the CCD's preview and capture speed; the higher the supported binning, the higher the attainable speed, at the cost of resolution. Binning simply treats several pixels as one in the calculation: 2×2 binning, for example, counts four pixels as one pixel.
● Exposure time: the longer the supported exposure, the better for shooting in weak light. The minimum exposure time can in principle reflect the CCD's sensitivity indirectly, but too many other conditions must be taken into account.
● Gain: a signal-amplification parameter. The larger the gain, the shorter the required exposure time, but the noise increases correspondingly.

Key technical terms explained:

1. What is a CCD camera?
CCD is the abbreviation of Charge Coupled Device (Chinese: 电荷耦合器件), a semiconductor imaging device, and accordingly it offers high sensitivity, resistance to strong light, low distortion, small size, long life and resistance to vibration.

2. How a CCD camera works. The image of the subject is focused by the lens onto the CCD chip, and the CCD accumulates charge in proportion to the light's intensity. Under the control of the video timing, the charge accumulated at each pixel is shifted out point by point and, after filtering and amplification, forms the video signal output. Connect the video signal to the video input of a monitor or television and the same picture as the original image can be seen.

3. Choosing a resolution. The figure used to evaluate camera resolution is horizontal resolution, measured in line pairs: the number of black-and-white line pairs distinguishable in the image. Common monochrome cameras resolve about 380–600 lines and color cameras 380–480; the larger the number, the sharper the image. For ordinary surveillance, a monochrome camera of about 400 lines meets the requirement; for special uses such as medicine and image processing, a 600-line camera gives a clearer image.

4. Imaging sensitivity. Sensitivity is usually stated as the minimum ambient illumination required: roughly 0.02–0.5 lux for monochrome cameras, and mostly above 1 lux for color cameras. A 0.1 lux camera serves ordinary surveillance; for night use or weak ambient light, a 0.02 lux camera is recommended, and a low-illumination camera is also required when working with near-infrared illuminators. Sensitivity also depends on the lens: 0.97 lux at F0.75 is equivalent to 2.5 lux at F1.2 and to 3.4 lux at F1. Reference illuminance levels:
Summer sunlight: 100,000 lux
Overcast outdoors: 10,000 lux
Television studio: 1,000 lux
Desktop 60 cm from a 60 W lamp: 300 lux
Indoor fluorescent lighting: 100 lux
Indoors at dusk: 10 lux
Candlelight at 20 cm: 10–15 lux
Street lighting at night: 0.1 lux

5. Electronic shutter. Electronic shutter times run from 1/50 to 1/100,000 s. A camera's electronic shutter is generally set to automatic, adjusting the shutter time to the ambient brightness to obtain a clear image; some cameras let the user set the shutter time manually to suit special applications.

6. External sync and external trigger. External sync means different video devices use one shared sync signal to keep their video outputs synchronized, guaranteeing that the devices' outputs have identical frame and line start and stop times. To implement it, the camera must be fed a composite sync signal (C-sync) or a composite video signal. External sync does not guarantee the user a complete, continuous frame starting from a specified instant; that function requires special cameras with an external-trigger capability.

7. Spectral response. CCD devices are made of silicon and are quite sensitive to the near infrared, with a spectral response extending to about 1.0 µm and a peak in the green (550 nm); the response curve is shown in the figure at right. For covert night surveillance, near-infrared illumination can be used: the human eye cannot make out the scene, yet it images clearly on the monitor. Because the CCD's surface carries a transparent electrode that absorbs ultraviolet, CCDs are insensitive to UV. A color camera's imaging elements carry red, green and blue filter stripes, so color cameras are sensitive to neither infrared nor ultraviolet.

8. CCD chip size. Common imaging sizes are 1/2", 1/3" and so on; cameras with smaller imaging sizes can be built smaller. With the same optical lens, the larger the imaging size, the wider the field of view.
Chip type / imaging area (width × height) / diagonal:
1/2": 6.4 mm × 4.8 mm, 8 mm.
1/3": 4.8 mm × 3.6 mm, 6 mm.

9. Pixels: the familiar figure. For a given chip, the higher the pixel count, the lower the sensitivity; the two trade off inversely, so more pixels are not automatically better, and once the count is adequate, sensitivity should take priority.

Other open questions

[Reader commentary in the original entry:] The details above are not fully spelled out. First, the handling of the light is unclear: what kind of lens is the microlens (a convex lens?), and how is the light concentrated onto the pixel? Second, the description of the color filter is vaguer still. With RGB, are there three filters, or one filter whose passband is switched in time to measure the brightness of each color? Three stacked filters would need a pixel added per layer, which essentially rules that scheme out; so it should be time-multiplexed filtering, with the consequence that it is much slower than 3CCD processing (the filter's switching must be controlled). One must also ask whether a switchable filter separates color as well as a static filter (call it a lens filter, one that cannot be switched dynamically); this may be the real difference between 3CCD and single-CCD imaging. Finally, how 3CCD pixel counts should be compared with single-CCD counts is not explained. The 3CCD principle is to split the light with a prism (RGB) and project it onto three different CCDs (the CCDs used in 3CCD and single-CCD cameras are presumably not identical; the 3CCD versions may lack color filters, though filtered ones like the single-CCD's could be used at added cost). The consequence is that one CCD's pixel count determines the pixel count of the whole captured picture, not the "single CCD × 3" that manufacturers advertise; Panasonic's 3CCD thus actually trades picture pixel count for color fidelity. Pixels can of course be supplemented by mathematical interpolation, so the externally advertised picture pixel count matches that of single-CCD cameras, but under magnification a 3CCD picture may be blurrier than a single-CCD picture of the same nominal pixel count; it is not known whether anyone has tested this.

About the CCD file format: a CCD file is a text file generated by CloneCD recording the properties of a CD/DVD disc image. The CCD file is only a descriptor of the disc image and must be used together with it, as in IMG+CCD+SUB. It can be opened with WinMount.

Types of industrial CCD cameras

The CCD was invented at Bell Labs at the end of the 1960s. Conceived first as a new type of computer memory circuit, the CCD soon showed many other potential applications, including signal and image processing (thanks to silicon's photosensitivity). CCDs are processed on thin silicon wafers, each wafer carrying several identical functional ICs; the selected ICs are cut from the wafer and packaged in carriers for use in systems. In summary, the main CCD types are:

1. Area-array CCD: lets the photographer capture a moving subject in a single exposure at any shutter speed.

2. Linear CCD: scans across the picture with one row of pixels, making three exposures, one each behind red, green and blue filters. As the name indicates, a linear sensor captures a one-dimensional image. Initially used in advertising for static images; when handling high-resolution images it is restricted to stationary subjects under continuous illumination.

3. Trilinear CCD: three parallel rows of pixels covered by R, G and B filters; when a color picture is captured, the complete color image is assembled from the successive rows of pixels. Trilinear sensors are used mostly in high-end digital cameras to produce high resolution and fine spectral gradation.

4. Interline-transfer CCD: uses separate arrays for image capture and charge conversion, allowing the current image to be read out while the next is being exposed. Interline-transfer CCDs are commonly used in low-end digital cameras, camcorders and broadcast cameras for animation work.

5. Full-frame CCD: offers more charge-handling capability, better dynamic range, low noise and full optical resolution, and captures full-color pictures in a single shot. A full-frame CCD consists of a parallel register, a serial register and a signal-output amplifier. Exposure is controlled by a mechanical shutter or gate to preserve the image, the parallel register being used for metering and for reading out the metered values. The image is projected onto the parallel array, which receives the image information and divides it into discrete, quantized elements. The information then flows from the parallel register into the serial register; the process repeats until all the information has been transferred, after which the system reassembles the image precisely.

The full exposure sequence of a digital camera:
1. The mechanical shutter opens and the CCD is exposed.
2. Inside the CCD, the optical signal is converted into an electrical signal.
3. The shutter closes, blocking the light.
4. The charge is transferred to the CCD output and converted into a signal.
5. The signal is digitized and the data written to memory.
6. The image data are processed and displayed on the LCD or a computer.

How does an area-array digital camera expose a color image?
1. Simultaneous exposure of three CCDs. The first method exposes three CCD chips simultaneously, capturing all the color information in a single exposure. As light passes through the lens toward the CCD surfaces, a special prism beam-splitter divides the image-forming light among three different CCD planes. Each CCD records the color information of only one of the red, green and blue components and reproduces only that one color; software alignment then merges them into a complete full-color picture. Because the human eye is most sensitive to the green band of the spectrum, some digital cameras arrange the filters with two rows of green filters to record the green information and a third row of alternating red and blue mosaic filters to record red and blue separately. Since the red and blue information has gaps, interpolation computed from neighbouring pixels is used to supply the additional color information.

2. Single chip, three exposures. The second way an area-array camera captures color information is the "single-chip, triple-exposure" method. A filter wheel must be mounted in front of the camera lens; when shooting, three separate exposures are made through the wheel's red, green and blue filters, recording the red, green and blue color information in turn. Finally the camera's software combines the information from the three exposures into a full-color image. Because three exposures record the color information, a photographer using such an area-array digital camera is clearly limited to static subjects. Moreover, possible differences between the three shooting conditions may keep the camera's software from recombining the image properly; in particular, fluctuations of the light source during the exposures will alter the image's color balance. Triple-exposure digital cameras can, however, shoot moving subjects in monochrome (including black-and-white photographs), because the filter wheel carries, besides the three red, green and blue filters, a clear filter used for single-exposure monochrome shooting; needing only one exposure, it can capture moving subjects.

3. Single chip, single exposure. The third method is the "single-chip, single-exposure" method. Every individual pixel is covered with red, green or blue filters in one of two layouts: stripe coverage, or interleaved mosaic-pattern coverage. Some chips carry more green filters than red and blue ones, to match the human eye's greater sensitivity to green within the visible spectrum; using more green filters in this way improves the image's resolution. Each photosensitive pixel captures only one color and must obtain the rest of its color information from neighbouring pixels, which is done by interpolation. If incorrect color information is assigned to a pixel, the interpolation result also suffers, which usually shows most clearly at the edges of high-contrast subjects: black text, for example, often acquires colored fringes.

The CCD's three roles in image formation:

1. Exposure: converting the optical signal into an electrical signal at discrete pixels. An image is acquired when incident light falls on the pixel array in the form of photons. The energy of each absorbed photon reacts in the silicon to produce an electron–hole pair; the number of electrons each pixel can collect depends linearly on the brightness and the exposure time, and non-linearly on the wavelength.

2. Charge transfer: moving the charge within the CCD. Once the charge has been collected and held in the pixel structure, it must be delivered to the sense amplifier, which is physically separated from the pixels; as one pixel's charge moves, the charge of all corresponding pixels moves with it. The charge is then converted to a voltage and output through an amplifier.

EMCCD (Baidu Baike entry, http://baike.baidu.com/view/276995.htm)

EMCCD

Applications in photon detection keep demanding more detector sensitivity, and EMCCD (electron-multiplying CCD) technology is the answer to these ever-stricter requirements. In applications that require fast, dynamic imaging of extremely faint cellular fluorescence, such as single-photon detection, multi-dimensional (4- or 5-D) live-cell microscopy and calcium-flux imaging, EMCCD technology offers a powerful and economical solution. Its extreme sensitivity means lower excitation energy (which reduces photobleaching), lower dye concentrations and higher frame rates; workers in the life sciences will know what that means for their research. Even at low scan rates, the fluorescence signal level can be comparable to or below the readout noise and so go undetected; EMCCD gain markedly improves the signal-to-noise ratio of such extremely weak signals, enabling faster sampling and shorter exposure times in high-throughput analysis. Similarly, certain fields of physics research, such as Bose–Einstein condensation, astronomical observation (including adaptive optics) and X-ray observation of neutron stars, all stand to benefit from this new detection technique.

EMCCD gain

EMCCD technology, sometimes called "on-chip gain", is a new amplification technique for detecting very weak optical signals, first applied by Andor Technology Ltd. in its iXon series of high-end, ultra-sensitive cameras released in 2001; there are now iXon models for imaging and Newton models for spectroscopy. The main difference from an ordinary scientific CCD detector is that the readout (transfer) register is followed by a chain of "gain registers" (see figure 1), whose electrode structure differs from the transfer register's; the signal charge is amplified there. In the gain register, unlike the transfer register, one of the electrodes is replaced by two: electrode 1 is held at a suitable voltage while electrode 2 carries the clock pulses, but at a voltage much higher than mere charge transfer would require (roughly 40–60 V). The electric field produced between electrodes 1 and 2 is strong enough that electrons undergo "impact ionization" during transfer, generating new electrons: multiplication, or gain. The multiplication per transfer is very small, at most about ×1.01 to ×1.015, but when the process is repeated many times (successive transfers through several thousand gain registers), the signal achieves a substantial gain, reaching 1000× or more.

At present, the one respect in which the EMCCD cannot replace the ICCD is gating: the ICCD's nanosecond-scale gate widths give it a time resolution that keeps it the most effective tool for time-resolved dynamic measurements.

It is also worth pointing out that, for detecting extremely weak photon-level signals, every point in the chain where signal is lost must be addressed to keep the loss to a minimum. For example, if the detector's entrance path has only a single window, the photon loss from window reflections is necessarily smaller than with multiple windows (see figure 5). But this must rest on high-grade hermetic vacuum sealing: to protect against condensing water vapour and other gases damaging the surface, CCD chips normally carry a protective window, and unless a high-quality vacuum seal can be guaranteed, no detector manufacturer dares remove it. Most detector heads therefore have at least two windows; only a very few manufacturers, relying on exceptional vacuum-sealing processes, have removed the protective window and achieved a true single-window design. From this one can see how crucial, even decisive, the head's vacuum integrity is to the performance of any scientific detector, above all an EMCCD.

Special designs

1. Dual amplifiers. Because dynamic range is limited, one design constraint in low-light imaging is that strong and weak signals cannot both be acquired at a fixed frame rate. To preserve a wide dynamic range, a multiplying sensor can adopt a dual-amplifier design: one conventional amplifier, whose output extends the upper limit of the output dynamic range, and one amplifier following the multiplication gain, which raises detection sensitivity. Used together, the CCD can serve illumination fields with a wide dynamic range. A weakness of the EMCCD is that the multiplication register associated with the readout amplifier is usually designed for high operating speed, which leads to larger readout noise. Although the multiplication gain can overcome the increased readout noise, it affects the system's imaging dynamic range, so the dual amplifier is an effective compensating measure for the EMCCD.

2. Back illumination. EMCCDs can also use a back-illuminated structure, combining quantum efficiencies as high as 90% with charge multiplication to raise sensitivity, giving the best low-light response at high frame rates. Back-illuminated EMCCDs can also be configured with dual amplifiers.

3. The effect of cooling. Temperature has a marked effect on the on-chip multiplication gain: the lower the temperature, the more secondary electrons each primary electron produces, and the higher the on-chip gain. Studies show that cooling the detector to −30 °C or below can push the on-chip gain past 1000×. Good EMCCD performance depends on choosing the optimal CCD temperature and controlling its variation with the environment. Cooling the CCD raises the device's multiplication gain and reduces the pixels' dark-current generation, but it also increases the appearance of spurious charge: secondary electrons generated out of nothing when electrons enter the multiplication-register pixels and the clock waveform becomes significantly distorted. Spurious charge increases slightly as the temperature falls. The EMCCD's total dark signal equals the spurious charge plus the dark charge.
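Two quantitative relationships stated in these entries, the compounding of the tiny per-stage multiplication into a large total gain, and the dynamic-range definition given under the camera parameters (dynamic range = 20 × log10(full-well electrons / total noise)), can be sketched numerically. The specific figures below are purely illustrative:

```python
from math import log10

def em_total_gain(p_per_stage, n_stages):
    """Mean multiplication-register gain: each stage multiplies the
    charge by (1 + p), so N stages give (1 + p)**N."""
    return (1.0 + p_per_stage) ** n_stages

def dynamic_range_db(full_well_e, total_noise_e):
    """Dynamic range in dB, as defined in the text:
    20 * log10(full-well electrons / total noise)."""
    return 20.0 * log10(full_well_e / total_noise_e)

# A per-stage gain of only 1.5% compounds to well over 1000x in 500 stages,
# and an assumed 30000 e- full well over 30 e- total noise gives 60 dB.
total_gain = em_total_gain(0.015, 500)
dr = dynamic_range_db(30000.0, 30.0)
```

The exponential compounding is why a per-transfer multiplication that looks negligible (×1.01–×1.015) yields the >1000× gains quoted above once the charge has passed through thousands of gain-register stages.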