If drones have suddenly become the hot topic in digital agriculture, overshadowing the introduction of autonomous tractors and field robots, then camera scanning of crops and machine processes is not far behind, and is closely allied with drone operation itself.
The digital capture of images has led to a multitude of practical computing advances, with agriculture offering plenty of scope for deploying both the systems and the software behind them, as reflected in the silver medals awarded at this year's Agritechnica.
Having a picture of something represented in a digital form, be it a barley crop or a brain scan, allows mathematicians to get to work on the data and create tools for recognising patterns, forms, and features within it.
At its base level, this is a highly involved and complex field of science, but as it has progressed, models and algorithms are emerging at a higher level that can be pressed into useful service by companies without having to consider such exotic concepts as Gaussian filters or Haar wavelets.
It is these upper layers we see being used in digital applications that are based on visual sensing of a field or a bag of fertiliser.
Ground rules of camera use
To help get a grip on the fundamentals of camera-based technology, two helpful conventions are worth bearing in mind.
The first is a set of four simple rules that describe what it is that a camera system is actually doing:
- If an image is fed into a computer and another, altered, image is the output, then that is image processing.
- If a digital description of an image is given to a computer and an actual image is produced, then that is computer graphics.
- If an image is fed in and a digital description of it is produced, that is computer vision.
- If a description is fed in and an altered description is output, that is referred to as artificial intelligence (AI).
Whether that equates to true AI or not is an argument for another day.
Opinions do vary; however, it is the term deployed to indicate that the algorithms used may not be working directly from an image, but from a machine-generated description of it.
An image description may be a straightforward JPEG file, or it may be the result of some sort of processing; for instance, to identify certain shapes or features within it.
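The four rules above can be illustrated with a toy sketch. Everything here is invented for illustration (the tiny array standing in for an image, the threshold, and the pixel count); it is not drawn from any of the systems described in this article:

```python
import numpy as np

# A toy 2D greyscale "image": 0 = dark soil, 255 = bright grain
image = np.array([
    [0,   0, 255,   0],
    [0, 255, 255,   0],
    [0,   0, 255, 255],
], dtype=np.uint8)

# Image processing: an image goes in, an altered image comes out
# (here, a simple brightness threshold)
processed = np.where(image > 127, 255, 0).astype(np.uint8)

# Computer vision: an image goes in, a digital description comes out
# (here, the description is just a count of bright pixels)
description = {"bright_pixels": int((image > 127).sum())}

# "AI" in the article's loose sense: a description goes in, an
# altered description comes out -- this rule never touches the image
judgement = {"grain_present": description["bright_pixels"] > 3}
```

The point of the last step is that the algorithm works only on the machine-generated description, not on the pixels themselves.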
The other convention is the universal golden rule that the output is only as good as the input, i.e. garbage in – garbage out.
Therefore, the quality and integrity of the image, or description, is of vital importance to the accuracy of the resulting action.
Beyond the spectrum
It is these basics which underpin the technology behind the use of cameras, and not just in the visible spectrum – they can be just as well applied to images generated at infrared or ultraviolet wavelengths.
Cameras recording at these wavelengths are known as multispectral and can show crops distressed by disease or drought well before the stress becomes visible to the naked eye.
Multispectral cameras do carry an extra cost though, typically three or four times that of standard digital cameras.
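One common way of turning multispectral readings into a crop-stress indicator – not named in the article, but widely used – is a vegetation index such as NDVI, which compares near-infrared and red reflectance. Healthy leaves reflect strongly in the near infrared, so a falling value can flag distress early. The reflectance figures below are illustrative only, not measured data:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Inputs are reflectances in the range 0-1. Values near +1 suggest
    dense, healthy vegetation; values near 0 suggest bare soil or a
    stressed crop.
    """
    if nir + red == 0:
        return 0.0
    return (nir - red) / (nir + red)

# Illustrative values: a healthy plant reflects far more near-infrared
healthy = ndvi(nir=0.50, red=0.08)
stressed = ndvi(nir=0.30, red=0.20)
```

Here `healthy` works out at roughly 0.72 against 0.20 for `stressed` – a gap a camera can measure long before the eye notices anything.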
Practical application
At this year’s Agritechnica, there will be many camera-based systems and applications that utilise modern computing power and programmes.
The technology gained several silver medals in the DLG awards, including the Yield EyeQ from Carl Geringhoff mbH & Co.
In this system, cameras are placed on the rear of a combine header to help optimise the header settings in difficult harvesting conditions, especially lodged crops.
The cameras and associated software identify the grains and infructescences (seed heads) that remain after the header has passed over the ground, enabling the operator to adjust the header according to what has been left uncollected.
Presently it is a stand-alone system, but the potential to integrate it into a harvester’s automated settings system is unlikely to have escaped the attention of either its developers or combine manufacturers.
Slippery slope
Another camera-reliant innovation to be introduced at Agritechnica this year is the Smart-Hill system, jointly developed by Einböck and Claas E-Systems.
This determines the gradient of a side slope during hoeing by means of a high-resolution Claas Culti Cam stereo camera, enabling the automatic correction of the hoeing implement to keep it at a 90° angle to the crop row.
The image generated by the cameras is analysed for colour, to differentiate between the soil and the crop row, while 3D surface models are also involved in determining the gradient.
Cameras for corn crushing
Three companies are presenting similar camera-based innovations for ensuring the correct processing of grains in maize silage as it passes through the harvester, and all three were awarded a silver medal.

The systems are said to use AI to analyse images of the structure of the chopped material and then determine the current grain breakdown.
There are two stages to the analysis, with the first distinguishing between grain and residual plant matter in order to subsequently measure all grain constituents.
The grain constituents are divided into two fractions, smaller and larger than 4.75mm, for the CSPS (Corn Silage Processing Score) value.
A percentage ratio is then calculated, allowing adjustments to be made to the harvester in real time.
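The arithmetic behind that percentage ratio can be sketched as follows. The figures are invented for illustration, and the real systems derive the two fractions from image analysis rather than from weighed sieve samples:

```python
def csps_percent(fraction_under_4_75mm: float,
                 fraction_over_4_75mm: float) -> float:
    """Corn Silage Processing Score-style ratio: the grain fraction
    smaller than 4.75mm as a percentage of all grain constituents,
    following the two-fraction split described in the article."""
    total = fraction_under_4_75mm + fraction_over_4_75mm
    if total == 0:
        return 0.0
    return 100.0 * fraction_under_4_75mm / total

# e.g. 70 units of grain fragments under 4.75mm and 30 over
score = csps_percent(70.0, 30.0)
```

A higher score means more of the grain has been broken down small enough to digest, which is what the real-time harvester adjustments are chasing.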
The three companies that were awarded the silver medals are: Claas, with its Cemos Auto Chopping assistance system; the ForageQualityCam from Fendt; and the ForageCam from New Holland.
No free lunch
Naturally, none of these systems come free, but prices will vary according to their type and whether any other hardware is required to action their output.
A hidden cost which is not, as yet, being widely discussed is that they all require a power source to run. The more complex the system, the greater the power input required.
Running the equivalent of a laptop or two may pale into insignificance beside a tractor's other power requirements, but collectively, and over time, it all adds up.