Sensor Science

The purpose of a sensor is to extract information about a target, whether nearby or remote. A well-designed sensor extracts the maximum information content from the target, and millimetre wave sensors have unique and powerful capabilities for this purpose.

A mm-wave sensor collects electromagnetic radiation from a target using an antenna. The antenna acts as an impedance-matching network between free space and the sensor's transmission line. It may be a lens followed by a receiver (as in a digital camera) or a simple geometry of conductors (as in a TV aerial).

The electromagnetic wave is a transverse electric and magnetic field varying in time and space. A millimetre wave antenna is normally designed to collect a single component of these fields, referred to as a polarisation. If the antenna selects the horizontally varying component of the electric field, this component is the horizontal linear polarisation; other linear or circular polarisations may likewise be measured. Antennas may also be designed to collect two orthogonal polarisations simultaneously (e.g. horizontal and vertical linear, or right-hand and left-hand circular), in which case the antenna has two outputs.
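Polarisation selection can be sketched with Jones vectors: an ideal single-polarisation antenna outputs the projection of the incident field onto its polarisation basis vector. The incident wave and basis states below are illustrative choices, not a model of any particular antenna.

```python
import numpy as np

# Incident field as a Jones vector: complex amplitudes of the
# horizontal (x) and vertical (y) electric-field components.
# Here: a circularly polarised wave (illustrative choice).
E = np.array([1.0, 1.0j]) / np.sqrt(2)

# Polarisation states an antenna might be designed to select.
h_lin = np.array([1.0, 0.0])                  # horizontal linear
v_lin = np.array([0.0, 1.0])                  # vertical linear
circ = np.array([1.0, -1.0j]) / np.sqrt(2)    # the orthogonal circular state

# Collected power is the squared magnitude of the projection
# (np.vdot conjugates its first argument).
power_h = abs(np.vdot(h_lin, E)) ** 2   # ~0.5
power_v = abs(np.vdot(v_lin, E)) ** 2   # ~0.5
power_c = abs(np.vdot(circ, E)) ** 2    # ~0.0: fully rejected
```

A circular wave splits its power equally between the two linear outputs of a dual-polarisation antenna, but is completely rejected by the orthogonal circular state.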

An antenna designed to collect electromagnetic radiation will support one, or perhaps more, mode structures. These are the electric and magnetic field distributions that the solutions of Maxwell's equations allow to exist inside the antenna. By design, the dominant mode is normally the lowest-order mode, giving the sensor the best possible measurement spatial resolution.
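As an illustration of mode structures, the TE-mode cutoff frequencies of a rectangular waveguide feed follow directly from Maxwell's equations; a guide is operated between the lowest-order cutoff and the next one so that only the dominant mode propagates. The guide dimensions below are a hypothetical W-band-style example.

```python
import math

c = 299_792_458.0  # speed of light, m/s

def te_cutoff_hz(m, n, a, b):
    """Cutoff frequency of the TE(m,n) mode of an a x b rectangular waveguide."""
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# Hypothetical W-band-style guide cross-section, metres.
a, b = 2.54e-3, 1.27e-3

f10 = te_cutoff_hz(1, 0, a, b)  # lowest-order (dominant) mode, ~59 GHz
f20 = te_cutoff_hz(2, 0, a, b)  # first higher-order mode, ~118 GHz
```

Between roughly 59 GHz and 118 GHz this guide is single-moded: only the lowest-order field distribution exists, as the text describes.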

Since the electromagnetic radiation is a harmonically varying electric and magnetic field, its capture by the antenna is recorded for processing into useful information. The harmonic electric field can either be sampled (which measures the phase and amplitude of the field) or detected (which measures the power and, in doing so, destroys the phase information). Detection is usually cheaper, but sampling can deliver more powerful capabilities. A TV effectively samples the incoming radio-frequency electromagnetic radiation, whereas the human eye or a digital camera detects the optical electromagnetic radiation.
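The difference between sampling and detection can be sketched numerically: quadrature correlation of a sampled harmonic field recovers amplitude and phase, while square-law detection leaves only the power. The sample rate, frequency and phase below are arbitrary illustrative values.

```python
import numpy as np

fs = 10e9                  # sample rate, Hz (illustrative)
f = 1e9                    # (down-converted) signal frequency, Hz
t = np.arange(250) / fs    # an integer number of signal periods
true_phase = 0.7           # radians, unknown to the sensor
field = np.cos(2 * np.pi * f * t + true_phase)

# Sampling: correlating against quadrature references recovers both
# amplitude and phase of the harmonic field.
i = 2 * np.mean(field * np.cos(2 * np.pi * f * t))
q = -2 * np.mean(field * np.sin(2 * np.pi * f * t))
amplitude = np.hypot(i, q)   # ~1.0
phase = np.arctan2(q, i)     # ~0.7 rad: phase preserved

# Detection (square law): only the mean power survives; the phase
# information is destroyed.
power = np.mean(field ** 2)  # ~0.5, independent of true_phase
```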

Two simple categories into which sensors fall are:

A) Active Sensors: where man-made radiation from a transmitter is radiated towards the subject and the reflected or back-scattered radiation is collected and analysed by the receiver part of the sensor. This technique is referred to as radar. If the transmitter and receiver are at the same physical location, this constitutes a monostatic radar; otherwise it is a bistatic radar. In the use of radar, a phenomenon referred to as speckle is observed, which in the case of imaging radar leads to a graininess in the images. This is an interference effect associated with illuminating the subject from a source at a single point in space.

B) Passive Sensors: where natural (Planck) radiation from the subject and its environment is collected and analysed, for example to recognise threats concealed on the subject. This technique is referred to as radiometry. Images generated using radiometry appear surprisingly similar to visible images and, as such, are easy for people or machines to interpret.
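The speckle described under active sensing can be sketched as a random-phasor sum: many scatterers in one resolution cell, illuminated coherently from a single point, return with random phases. The scatterer and cell counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

n_scatterers = 50   # scatterers per resolution cell
n_cells = 100_000   # independent resolution cells

# Coherent single-point illumination: each scatterer returns a unit
# phasor with a random phase; the cell response is the phasor sum.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_cells, n_scatterers))
field = np.exp(1j * phases).sum(axis=1)
intensity = np.abs(field) ** 2

# Fully developed speckle has an exponential intensity distribution,
# so the standard deviation equals the mean: ~100% contrast, seen as
# graininess in radar images.
contrast = intensity.std() / intensity.mean()
```

An incoherent (passive) sensor averages over many illumination conditions, which is one reason radiometric images look smoother and more photograph-like.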

Two categories of quantum sensors are:

A) Continuous-Variable (CV): where a photon of a signal of interest is mixed with other photon(s) in a non-linear (mixing) device and the subsequent signal processing exploits both phase and amplitude information in the output. Preservation of phase information in CV systems enables coherent signal integration to build up high signal-to-noise ratios in the measurement of incoming photon streams.

B) Direct-Variable (DV): where a photon of a signal of interest is detected in a square-law detector and the subsequent signal processing exploits the intensity and arrival time of the photon. DV systems are more difficult to realise in the millimetre wave band, as the loss of phase information (through the square-law process) renders coherent integration impossible. This leaves only incoherent integration (or photon counting), and since the energy of a single photon (hf) is small compared to the thermal noise (kT), this can only be done if components are cryogenically cooled (T < 0.5 K).
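The hf versus kT comparison can be made concrete; the 100 GHz frequency below is an illustrative millimetre wave value.

```python
h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K

f = 100e9                    # 100 GHz millimetre wave photon
photon_energy = h * f        # ~6.6e-23 J

# At room temperature a single photon is buried in thermal noise ...
ratio_300k = photon_energy / (k * 300.0)   # ~0.016
# ... but below 0.5 K the photon energy dominates, which is what
# makes photon counting (incoherent integration) feasible.
ratio_0p5k = photon_energy / (k * 0.5)     # ~9.6
```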

Two categories of sensor-target configurations are related to the shapes of the wave-fronts of the electromagnetic radiation received by the sensor. For longer-range targets, wave-fronts appear parallel (or plane); this is referred to as the far-field (or Fraunhofer diffraction region). For closer-in targets, wave-fronts are noticeably curved (or spherical); this is referred to as the near-field (or Fresnel diffraction region). The two regimes:

1) Far-field (Fraunhofer diffraction zone) and 
2) Near field (Fresnel diffraction zone)  

are discriminated by a parameter referred to as the Rayleigh range. This is defined as twice the square of the sensor's linear aperture size divided by the wavelength of the electromagnetic radiation.
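That definition can be written down directly; the aperture size and frequency below are illustrative values for a millimetre wave sensor.

```python
c = 299_792_458.0  # speed of light, m/s

def rayleigh_range_m(aperture_m, frequency_hz):
    """Far-field boundary: R = 2 * D**2 / wavelength."""
    wavelength = c / frequency_hz
    return 2.0 * aperture_m ** 2 / wavelength

# A 0.3 m aperture at 94 GHz (illustrative): targets beyond ~56 m
# are in the far-field, closer ones in the near-field.
boundary = rayleigh_range_m(0.3, 94e9)
```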

When beam-forming in the near-field (to localise radiation to a particular area of the target), care has to be taken to account for the curvature of the wave-fronts. The benefit of operating in the near-field is that range and three-dimensional structural information can be obtained from the wave-front curvature. In an optical imager, this can be achieved by using multiple focal-plane receiver arrays. In an electronic imager, it is achieved by analogue or digital processing of the phases of the sampled electric fields of the waves.
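A sketch of how sampled phase curvature yields range: in the paraxial approximation, a point source at range R produces a quadratic phase profile phi(x) = (2*pi/wavelength) * x**2 / (2*R) across the aperture, so fitting that curvature recovers R. The wavelength, range and aperture below are illustrative, and this noiseless fit is a minimal sketch of the idea, not a real imaging algorithm.

```python
import numpy as np

wavelength = 3e-3    # metres (100 GHz, illustrative)
r_true = 5.0         # target range, metres (well inside the near-field)
x = np.linspace(-0.15, 0.15, 64)  # element positions, 0.3 m aperture

# Paraxial phase of a spherical wave-front sampled across the aperture.
phi = (2.0 * np.pi / wavelength) * x ** 2 / (2.0 * r_true)

# Digital processing of the sampled phases: fit the quadratic
# curvature and invert it for range.
curvature = np.polyfit(x, phi, 2)[0]      # coefficient of x**2
r_est = np.pi / (wavelength * curvature)  # recovered range, ~5.0 m
```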

The extreme near-field: When a target is extremely close to the sensor, typically within about one free-space wavelength of the radiation, the performance of the sensor antenna is affected by the electrical characteristics of the target. In this scenario, the target is said to be in the inductive range of the antenna.

In this extreme near-field regime, the capabilities of the near-field scanning microscope are realised. In such a "microscope", an antenna that almost touches the target physically moves over the target surface to build up an image of its structure. It can work passively or actively. The lateral spatial resolution is defined by the antenna geometry, so generally a fine point is used to achieve the best possible resolution. The spatial resolution can be many times smaller than the wavelength.

Different types of sensors can be used to reveal a variety of information about a target, leading to its recognition or identification. Imagers reveal the shapes of targets. Spectrometers reveal how the target signature varies with the frequency of the electromagnetic radiation. Polarimeters reveal information about the electromagnetic mode structure of the target, which is related to its shape, size, material types and orientation. In some cases, more than one type of sensor can be used on a single target. Artificial intelligence is a growing area in which information from a variety of sensors can be rapidly machine-assimilated to cue appropriate action.
