A Low Level API camera object wraps a pylon Device, providing more convenient access to the parameters of the Camera, the Stream Grabber, the Event Grabber, and the Transport Layer through GenApi parameter classes.
The following table shows the currently available classes:
| Transport Layer | Name of Class | Kind of Devices |
|---|---|---|
| Pylon1394 | Pylon::CBasler1394Camera | IIDC 1394 compliant cameras |
| PylonGigE | Pylon::CBaslerGigECamera | GigE Vision compliant cameras |
| PylonUsb | Pylon::CBaslerUsbCamera | USB3 Vision compliant cameras |
| PylonCLSer | Pylon::CBaslerCameraLinkCamera | Serial Camera Link cameras |
In this document we distinguish between image acquisition, image data transfer, and image grabbing.
We denote the processes inside the camera as image acquisition. When a camera starts image acquisition, the sensor is exposed. When exposure is complete, the image data is read out from the sensor.
The acquired image data is transferred from the camera's memory to the PC using an interface such as IEEE 1394 or Gigabit Ethernet.
The process of writing the image data into the PC's main memory is referred to as "grabbing" an image.
A camera may provide different sources for image data, where each source can deliver a stream of image data. In pylon, so-called Stream Grabber objects are responsible for managing the process of grabbing data from a stream, i.e., writing the data into the PC's main memory.
A Stream Grabber only grabs images from a single data stream. To grab data from multiple streams, several Stream Grabbers are needed.
The following sections describe the use of Stream Grabber objects. Their order reflects the sequence in which a typical grab application uses a Stream Grabber object.
Stream Grabber objects are managed by Camera objects. The number of available stream grabbers can be determined with the IPylonDevice::GetNumStreamGrabberChannels() method of the camera object. The IPylonDevice::GetStreamGrabber() method returns a pointer to a Pylon::IStreamGrabber object. Before retrieving a Stream Grabber object, the Camera object must be opened. Check the value returned by IPylonDevice::GetNumStreamGrabberChannels() to see which index values can be passed to IPylonDevice::GetStreamGrabber(). The Stream Grabber object itself must also be opened before it is used. Some camera objects, e.g., for Camera Link cameras, do not support stream grabbers and return 0 from IPylonDevice::GetNumStreamGrabberChannels().
Example:
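A minimal sketch, assuming a created and opened Pylon::CBaslerGigECamera instance named camera:

```cpp
// Assumes: Pylon::CBaslerGigECamera camera( ... ); camera.Open();
if ( camera.GetNumStreamGrabberChannels() > 0 )
{
    // The returned pointer is owned by the camera object.
    Pylon::IStreamGrabber* pStreamGrabber = camera.GetStreamGrabber( 0 );

    // The stream grabber must be opened before it can be used.
    pStreamGrabber->Open();
}
```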
Never call delete or free on a stream grabber pointer retrieved from a Camera object. The Camera object retains ownership of a Stream Grabber object and manages its lifetime.

Independent of the transport layer used, each stream grabber provides two mandatory parameters:

- MaxNumBuffer: the maximum number of buffers to be used for grabbing
- MaxBufferSize: the maximum size, in bytes, of a single buffer
A grab application must set the above two parameters before grabbing starts.
Depending on the transport layer, a Stream Grabber provides further parameters, such as streaming-related timeouts. All of these parameters are set to reasonable default values, so image grabbing can be performed without tweaking the defaults.
There are two ways to access a Stream Grabber's parameters:
The most convenient way is to use a concrete Stream Grabber class. Each Camera class provides a typedef for the corresponding Stream Grabber class. A Stream Grabber class takes ownership of an IStreamGrabber pointer returned by the GetStreamGrabber() method and provides members for accessing the Stream Grabber object's parameters.
Example:
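A sketch of the typed approach; the GigE class names and the buffer values are illustrative:

```cpp
// The StreamGrabber_t class takes ownership of the IStreamGrabber pointer.
Pylon::CBaslerGigECamera::StreamGrabber_t streamGrabber( camera.GetStreamGrabber( 0 ) );
streamGrabber.Open();

// The two mandatory parameters are exposed as members.
streamGrabber.MaxBufferSize.SetValue( camera.PayloadSize.GetValue() );
streamGrabber.MaxNumBuffer.SetValue( 10 );
```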
When using the generic programming approach, i.e., using the Pylon::IPylonDevice and Pylon::IStreamGrabber interfaces instead of the Camera and Stream Grabber classes, the IStreamGrabber::GetNodeMap() method must be used to retrieve the GenApi node map holding the stream grabber's parameters.
Stream Grabber node maps are used in the same way as node maps for Camera objects. The use of node maps for Camera objects is described in the Accessing Parameters section.
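A sketch of the generic approach, assuming the pStreamGrabber pointer from above and a previously determined payloadSize value:

```cpp
// Retrieve the grabber's parameter node map and access its parameters
// through GenApi smart pointers.
GenApi::INodeMap* pNodeMap = pStreamGrabber->GetNodeMap();
GenApi::CIntegerPtr maxBufferSize( pNodeMap->GetNode( "MaxBufferSize" ) );
GenApi::CIntegerPtr maxNumBuffer( pNodeMap->GetNode( "MaxNumBuffer" ) );
maxBufferSize->SetValue( payloadSize );  // payloadSize: buffer size in bytes
maxNumBuffer->SetValue( 10 );
```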
Depending on the transport layer used for grabbing images, different system resources are required.
The Stream Grabber's PrepareGrab() method is used to allocate the required resources.

In addition to allocating resources, the PrepareGrab() call causes the camera object to change its state. Typically, the camera parameters controlling the image size (AOI, pixel format, binning, etc.) become read-only after PrepareGrab() has been called. These parameters must be set up before calling PrepareGrab() and must not be changed while image grabbing is active.
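A sketch of the required ordering, with illustrative parameter values (and assuming using namespace Basler_GigECameraParams for the enum names):

```cpp
// Set the image-related camera parameters first ...
camera.PixelFormat.SetValue( PixelFormat_Mono8 );
camera.Width.SetValue( 640 );
camera.Height.SetValue( 480 );

// ... then allocate the grab resources. The image size parameters are
// typically read-only from now on until FinishGrab() is called.
streamGrabber.PrepareGrab();
```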
All pylon transport layers can grab image data into memory buffers allocated by the user application. The user-allocated memory buffers must be registered with the Stream Grabber object. This registration step is needed for performance reasons: it allows the Stream Grabber to prepare and cache internal data structures used for dealing with user-provided memory.
The buffer registration returns handles to the registered buffers, which are used in the steps following the buffer registration.
Example:
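A sketch of buffer registration, assuming the streamGrabber object from above; the buffer count is illustrative:

```cpp
const uint32_t c_numBuffers = 10;
const size_t payloadSize = (size_t) camera.PayloadSize.GetValue();

std::vector<unsigned char*> buffers;              // user-allocated memory
std::vector<Pylon::StreamBufferHandle> handles;   // handles returned by registration

for ( uint32_t i = 0; i < c_numBuffers; ++i )
{
    unsigned char* pBuffer = new unsigned char[ payloadSize ];
    buffers.push_back( pBuffer );

    // Registration returns the handle used in all subsequent steps.
    handles.push_back( streamGrabber.RegisterBuffer( pBuffer, payloadSize ) );
}
```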
The buffer registration mechanism restricts the ownership of the buffers. Although the content of registered buffers can be changed by the user application, the application must not delete the memory of registered buffers. Freeing the memory is not allowed until the buffers have been deregistered using IStreamGrabber::DeregisterBuffer().
Each Stream Grabber maintains two different buffer queues. The buffers to be filled must be fed into the Grabber's input queue. Grabbed buffers can be retrieved from the Grabber's output queue.
The IStreamGrabber::QueueBuffer() method is used to put a buffer into the grabber's input queue. The QueueBuffer() method accepts two parameters: a buffer handle and an optional pointer to user-provided context information. Together with the buffer, the context pointer is passed back to the user when the grabbed buffer is retrieved from the grabber's output queue. A Stream Grabber never changes the memory to which the context pointer points.
Example:
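Continuing the sketch from the registration step:

```cpp
// Feed all registered buffers into the grabber's input queue. The second
// argument is the optional context pointer; NULL when it is not needed.
for ( size_t i = 0; i < handles.size(); ++i )
{
    streamGrabber.QueueBuffer( handles[ i ], NULL );
}
```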
Queueing buffers into the stream grabber's input queue does not make the camera start acquiring images! After queueing the buffers, the stream grabber is prepared to grab data from the camera into the queued buffers. Image acquisition must be explicitly started.
To start image acquisition, use the Camera object's AcquisitionStart parameter. AcquisitionStart is a command parameter, i.e., calling the Execute() method of the AcquisitionStart parameter sends an acquisition start command to the camera.
A camera device typically provides two acquisition modes:

- Single Frame mode: the camera acquires one image and then stops.
- Continuous mode: the camera acquires images continuously until acquisition is explicitly stopped.
To be precise, the acquisition start command does not necessarily cause the camera to start acquiring images immediately. When either external triggering or software triggering is enabled, the acquisition start command prepares the camera to acquire images. Actual acquisition starts when the camera senses an external trigger signal or receives a software trigger command.
When the camera's continuous acquisition mode is enabled, the AcquisitionStop parameter is used to stop image acquisition.
Normally, a camera starts transferring image data as soon as possible after acquisition has started; no special command to start the image transfer is needed.
Example:
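A sketch, again assuming using namespace Basler_GigECameraParams:

```cpp
// Select continuous mode and send the acquisition start command.
camera.AcquisitionMode.SetValue( AcquisitionMode_Continuous );
camera.AcquisitionStart.Execute();
```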
The transferred image data is written to the buffer(s) in the stream grabber's input queue. When a buffer is filled with grabbed image data, the stream grabber places it into its output queue, from which it can be retrieved by the user application.
There is a wait object associated with the Stream Grabber's output queue. This wait object allows the application to wait until either a grabbed image arrives at the output queue or a timeout expires.
When the wait operation successfully returns, the grabbed buffer can be retrieved with the Stream Grabber object's RetrieveResult() method. The RetrieveResult() method fills a Pylon::GrabResult object. The object contains, among other things, the following information:

- the status of the grab (e.g., grabbed, canceled, failed)
- the handle of the grabbed buffer
- a pointer to the grabbed buffer
- the user-provided context pointer
- image attributes such as size and pixel format
- an error number and an error description if the grab failed
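A sketch of waiting and retrieving; the 3000 ms timeout is illustrative:

```cpp
// Wait up to 3000 ms for a grabbed buffer to arrive at the output queue.
if ( streamGrabber.GetWaitObject().Wait( 3000 ) )
{
    Pylon::GrabResult result;
    streamGrabber.RetrieveResult( result );

    if ( result.Succeeded() )
    {
        const unsigned char* pImage = (const unsigned char*) result.Buffer();
        // ... process the image data ...
    }
    else
    {
        std::cerr << "Grab failed: " << result.GetErrorDescription() << std::endl;
    }
}
else
{
    // Timeout: no grabbed buffer arrived within 3000 ms.
}
```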
When getting a buffer from the grabber's output queue, ownership of the buffer is given over to the application. A buffer retrieved from the output queue will never be overwritten until it is again placed into the grabber's input queue.
Remember, a buffer retrieved from the output queue must be deregistered before its memory can be freed.
We recommend using the buffer handle from the Grab Result object to requeue a buffer into the grabber's input queue.
When the camera does not send data, a buffer remains in the grabber's input queue until the Stream Grabber object's CancelGrab() method is called. CancelGrab() moves all buffers from the input queue to the output queue, including any buffer currently being filled. By checking the status of a Grab Result object, you can determine whether the corresponding buffer has been canceled.
The following example shows a typical grab loop:
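A hedged reconstruction of such a loop; the iteration count and timeout are illustrative:

```cpp
for ( int i = 0; i < 100; ++i )
{
    // Wait for a grabbed buffer (timeout: 3000 ms).
    if ( !streamGrabber.GetWaitObject().Wait( 3000 ) )
    {
        std::cerr << "Timeout while waiting for a grabbed image." << std::endl;
        break;
    }

    Pylon::GrabResult result;
    streamGrabber.RetrieveResult( result );

    if ( result.Status() == Pylon::Grabbed )
    {
        // ... process result.Buffer() ...
    }
    else if ( result.Status() == Pylon::Failed )
    {
        std::cerr << "Grab failed: " << result.GetErrorDescription() << std::endl;
    }

    // Requeue the buffer, using the handle carried by the grab result.
    streamGrabber.QueueBuffer( result.Handle(), result.Context() );
}
```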
If the camera is set for continuous acquisition mode, acquisition should first be stopped:
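For instance, using the typed camera object:

```cpp
// Send the acquisition stop command to the camera.
camera.AcquisitionStop.Execute();
```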
If you are not sure that the grabber's input queue really is empty, call the Stream Grabber object's CancelGrab() method to flush the input queue. The canceled buffers then appear at the grabber's output queue.
An application should retrieve all items from the grabber's output queue before closing a Stream Grabber object.
Before freeing their memory, deregister the buffers.
When all buffers are deregistered, call the Stream Grabber object's FinishGrab() method to release all grab-related resources. FinishGrab() must not be called while there are still buffers in the grabber's input queue!
When grabbing is finished, the Stream Grabber object should be closed.
Example:
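A sketch of the complete cleanup sequence, continuing the snippets above:

```cpp
// Stop the camera (continuous mode) and flush the input queue.
camera.AcquisitionStop.Execute();
streamGrabber.CancelGrab();

// Retrieve everything left in the output queue; flushed buffers
// report the Canceled status.
Pylon::GrabResult result;
while ( streamGrabber.GetWaitObject().Wait( 0 ) )
{
    streamGrabber.RetrieveResult( result );
}

// Deregister the buffers before freeing their memory.
for ( size_t i = 0; i < handles.size(); ++i )
{
    streamGrabber.DeregisterBuffer( handles[ i ] );
    delete[] buffers[ i ];
}

// Release the grab-related resources and close the grabber.
streamGrabber.FinishGrab();
streamGrabber.Close();
```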
Here is the complete sample program for acquiring images from a GigE camera in continuous mode.
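The listing below is a condensed, hedged reconstruction of such a program (minimal error handling; the buffer count, image parameters, and loop length are illustrative):

```cpp
#include <pylon/PylonIncludes.h>
#include <pylon/gige/BaslerGigECamera.h>
#include <iostream>
#include <vector>

using namespace Pylon;
using namespace Basler_GigECameraParams;

int main()
{
    PylonInitialize();
    int exitCode = 0;
    try
    {
        // Create and open a camera object for the first GigE camera found.
        ITransportLayer* pTl = CTlFactory::GetInstance().CreateTl( CBaslerGigECamera::DeviceClass() );
        DeviceInfoList_t devices;
        if ( pTl->EnumerateDevices( devices ) == 0 )
            throw RUNTIME_EXCEPTION( "No camera found." );
        CBaslerGigECamera camera( pTl->CreateDevice( devices[ 0 ] ) );
        camera.Open();

        // Image-related parameters must be set before PrepareGrab().
        camera.PixelFormat.SetValue( PixelFormat_Mono8 );
        const size_t payloadSize = (size_t) camera.PayloadSize.GetValue();

        // Open the stream grabber and set the two mandatory parameters.
        CBaslerGigECamera::StreamGrabber_t grabber( camera.GetStreamGrabber( 0 ) );
        grabber.Open();
        grabber.MaxBufferSize.SetValue( payloadSize );
        grabber.MaxNumBuffer.SetValue( 10 );
        grabber.PrepareGrab();

        // Register user-allocated buffers and queue them for grabbing.
        std::vector<unsigned char*> buffers;
        std::vector<StreamBufferHandle> handles;
        for ( int i = 0; i < 10; ++i )
        {
            buffers.push_back( new unsigned char[ payloadSize ] );
            handles.push_back( grabber.RegisterBuffer( buffers.back(), payloadSize ) );
            grabber.QueueBuffer( handles.back(), NULL );
        }

        // Start continuous acquisition and grab 100 images.
        camera.AcquisitionMode.SetValue( AcquisitionMode_Continuous );
        camera.AcquisitionStart.Execute();
        for ( int i = 0; i < 100; ++i )
        {
            if ( !grabber.GetWaitObject().Wait( 3000 ) )
                break;  // timeout
            GrabResult result;
            grabber.RetrieveResult( result );
            if ( result.Status() == Grabbed )
                std::cout << "Grabbed image " << i << std::endl;
            grabber.QueueBuffer( result.Handle(), result.Context() );
        }
        camera.AcquisitionStop.Execute();

        // Clean up: flush the queues, deregister and free the buffers,
        // release the grab resources.
        grabber.CancelGrab();
        GrabResult r;
        while ( grabber.GetWaitObject().Wait( 0 ) )
            grabber.RetrieveResult( r );
        for ( size_t i = 0; i < handles.size(); ++i )
        {
            grabber.DeregisterBuffer( handles[ i ] );
            delete[] buffers[ i ];
        }
        grabber.FinishGrab();
        grabber.Close();
        camera.Close();
    }
    catch ( const GenericException& e )
    {
        std::cerr << "Exception: " << e.GetDescription() << std::endl;
        exitCode = 1;
    }
    PylonTerminate();
    return exitCode;
}
```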
Basler GigE Vision, USB3 Vision, and IIDC 1394 cameras can send event messages. For example, when a sensor exposure has finished, the camera can send an end-of-exposure event to the PC. The event can be received by the PC before the image data for the finished exposure has been completely transferred. The retrieving and processing of event messages is described in this section.
The Grabbing Images section describes how Stream Grabber objects are used to grab images from a camera. Analogously, Event Grabber objects are used to receive event messages from a camera.
Event Grabber objects are created and returned by Camera objects.
Never call free or delete on IEventGrabber pointers. The Camera object owns its Event Grabbers and manages their lifetime.
Event Grabbers use internal memory buffers for receiving event messages. The number of buffers can be parameterized using the Event Grabber's NumBuffer member:
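A sketch using the typed event grabber class for GigE cameras; the buffer count is illustrative:

```cpp
// GetEventGrabber() returns NULL if the device does not support events.
Pylon::CBaslerGigECamera::EventGrabber_t eventGrabber( camera.GetEventGrabber() );
eventGrabber.NumBuffer.SetValue( 20 );
```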
A connection to the device and all necessary resources for receiving events are allocated by calling the Event Grabber's Open() method:
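Continuing the sketch:

```cpp
// Allocates all resources needed for receiving events.
eventGrabber.Open();
```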
To let the camera send event messages, the sending of event messages must be enabled using the Camera object.
First, the EventSelector must be set to the type of event to be enabled. In the following example the selector is set to the end-of-exposure event:
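A sketch, assuming using namespace Basler_GigECameraParams:

```cpp
camera.EventSelector.SetValue( EventSelector_ExposureEnd );
```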
When the Event Selector is set, sending events of the desired type can be enabled by using the EventNotification parameter:
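For GigE cameras the corresponding enum entry is typically named as below; other device types may use a different entry, such as EventNotification_On:

```cpp
camera.EventNotification.SetValue( EventNotification_GenICamEvent );
```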
To be sure that you don't miss an event, the Event Grabber should be prepared before events are enabled (see the Creating and Preparing Event Grabbers section above).
The following code snippet illustrates how to disable the sending of end-of-exposure events:
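A sketch mirroring the enable sequence:

```cpp
camera.EventSelector.SetValue( EventSelector_ExposureEnd );
camera.EventNotification.SetValue( EventNotification_Off );
```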
Receiving events is very similar to grabbing images. The Event Grabber provides a wait object that is signaled when an event message is available. When an event message is available, it can be retrieved by calling the Event Grabber's RetrieveEvent() method.
In contrast to grabbing images, memory buffers for receiving events need not be provided by the application. Memory buffers to store event messages are organized by the Event Grabber itself.
In typical applications, waiting for grabbed images and event messages is done in one common loop. This is demonstrated in the following code snippet:
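A hedged sketch using the Pylon::WaitObjects container; pEventAdapter is an Event Adapter as created in the Using Event Adapters section below:

```cpp
// Wait on the event grabber's and the stream grabber's wait objects at once.
Pylon::WaitObjects waitObjects;
waitObjects.Add( eventGrabber.GetWaitObject() );   // index 0
waitObjects.Add( streamGrabber.GetWaitObject() );  // index 1

unsigned int idx;
if ( waitObjects.WaitForAny( 3000, &idx ) )
{
    if ( idx == 0 )
    {
        // An event message is available.
        Pylon::EventResult eventResult;
        if ( eventGrabber.RetrieveEvent( eventResult ) && eventResult.Succeeded() )
        {
            // Let the event adapter update the event-related nodes.
            pEventAdapter->DeliverMessage( eventResult.Buffer, sizeof eventResult.Buffer );
        }
    }
    else
    {
        // A grabbed image is available.
        Pylon::GrabResult grabResult;
        streamGrabber.RetrieveResult( grabResult );
        // ... process and requeue the buffer ...
    }
}
```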
The previous section explained how to receive an event message. This section describes how to interpret an event message.
The specific layout of event messages depends on the event type and the camera type. The pylon API uses GenICam support for parsing event messages. This means that the message layout is described in the camera's XML description file.
As described in the GenApi Node Maps section, a GenApi node map is created from the XML camera description file. That node map contains node objects representing the elements of the XML file. Since the layout of event messages is described in the camera description file, the information carried by the event messages is exposed as nodes in the node map. The camera object provides members used for accessing the event related nodes in the same way as camera parameter related nodes.
For example, an end-of-exposure event carries the following information:

- the frame ID, i.e., the number of the image frame the event belongs to
- the timestamp of the moment when the event was generated
- the index of the stream channel used to transfer the acquired image
Example: The camera object's Pylon::CBaslerGigECamera::ExposureEndEventFrameID member is used to access the number of the frame the event is associated with:
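For instance (assuming the typed GigE camera object camera):

```cpp
// Read the frame ID delivered with the last end-of-exposure event.
int64_t frameId = camera.ExposureEndEventFrameID.GetValue();
```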
As described in the Accessing Parameters section, the ExposureEndEventFrameID could also be retrieved by using the camera object's node map directly:
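A sketch of the node map route:

```cpp
GenApi::CIntegerPtr ptrFrameId( camera.GetNodeMap()->GetNode( "ExposureEndEventFrameID" ) );
int64_t frameId = ptrFrameId->GetValue();
```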
An Event Adapter object is used to update the event related nodes of the camera object's node map. Updating the nodes is done by passing the event message to an Event Adapter.
Event Adapters are created by Camera objects:
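For example:

```cpp
Pylon::IEventAdapter* pEventAdapter = camera.CreateEventAdapter();
```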
To update the event related nodes, call the Event Adapter's DeliverMessage() method for each received event message:
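A sketch, with eventResult as retrieved from the Event Grabber:

```cpp
// Parses the message and updates the event-related nodes.
pEventAdapter->DeliverMessage( eventResult.Buffer, sizeof eventResult.Buffer );
```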
Passing an event message to the Event Adapter does not by itself reveal whether the message contains an end-of-exposure event. The next section describes how node callbacks are used to get informed about the occurrence of specific events.
The previous section described how Event Adapters are used to push the content of event messages into a camera object's node map. The IEventAdapter::DeliverMessage() method updates all nodes related to the events contained in the message passed in.
As described in the Getting Informed About Parameter Changes section, it is possible to register callback functions that are fired when nodes may have been changed.
These callbacks can be used to determine whether an event message contains a certain event type. For example, to get informed about end-of-exposure events, a callback must be installed for one of the end-of-exposure event related nodes. The following code snippet illustrates how to install a callback function for the ExposureEndEventFrameID node:
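A hedged sketch using the GenApi callback mechanism:

```cpp
// Fired from inside IEventAdapter::DeliverMessage() whenever the
// ExposureEndEventFrameID node may have changed.
void EndOfExposureCallback( GenApi::INode* pNode )
{
    GenApi::CIntegerPtr ptrFrameId( pNode );
    if ( GenApi::IsReadable( pNode ) )
    {
        std::cout << "Exposure end, frame ID: " << ptrFrameId->GetValue() << std::endl;
    }
}

// Install the callback on the event-related node.
GenApi::CallbackHandleType hCallback =
    GenApi::Register( camera.ExposureEndEventFrameID.GetNode(), &EndOfExposureCallback );
```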
The registered callback will be fired from the context of the IEventAdapter::DeliverMessage() function.
Before closing and deleting the Camera object, the event related objects must be closed and destroyed as illustrated in the following code snippet:
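A hedged sketch of the teardown order, continuing the snippets above:

```cpp
// Deregister the node callback.
GenApi::Deregister( hCallback );

// Stop the camera from sending event messages.
camera.EventSelector.SetValue( EventSelector_ExposureEnd );
camera.EventNotification.SetValue( EventNotification_Off );

// Close the event grabber and destroy the event adapter.
eventGrabber.Close();
camera.DestroyEventAdapter( pEventAdapter );
```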
Basler cameras can send additional information appended to the image data, such as frame counters, time stamps, and CRC checksums. This section explains how to enable chunk features and how to access the appended data.
Before a feature producing a chunk can be activated, the camera's chunk mode must be activated:
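For example:

```cpp
camera.ChunkModeActive.SetValue( true );
```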
When the camera is in chunk mode, it transfers data blocks that are partitioned into chunks. The first chunk is always the image data. When chunk features are enabled, the image data chunk is followed by chunks containing the information generated by the chunk features.
Once the chunk mode is activated, chunk features can be enabled:
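A sketch enabling the frame counter chunk; other chunk features are selected analogously:

```cpp
camera.ChunkSelector.SetValue( ChunkSelector_Framecounter );
camera.ChunkEnable.SetValue( true );
```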
Grabbing from an image stream with chunks is very similar to grabbing from an image stream without chunks. Memory buffers must be provided that are large enough to store both the image data and the added chunk data.
The camera's PayloadSize parameter reports the necessary buffer size (in bytes):
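For example:

```cpp
const size_t payloadSize = (size_t) camera.PayloadSize.GetValue();
streamGrabber.MaxBufferSize.SetValue( payloadSize );
```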
Now an image plus added chunks can be grabbed:
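Grabbing itself follows the same pattern as without chunks:

```cpp
if ( streamGrabber.GetWaitObject().Wait( 3000 ) )
{
    Pylon::GrabResult result;
    streamGrabber.RetrieveResult( result );
    // result.Buffer() now holds the image chunk followed by the data chunks.
}
```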
The data block containing the image chunk and the other chunks has a self-descriptive layout. Before accessing the data in the added chunks, the data block must be parsed by a Chunk Parser object.
The Camera object is responsible for creating a Chunk Parser:
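For example:

```cpp
Pylon::IChunkParser* pChunkParser = camera.CreateChunkParser();
```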
Once a Chunk Parser is created, grabbed buffers can be attached to it. When a buffer is attached to a Chunk Parser, it is parsed, and access to the chunk data is provided through members of the Camera object.
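A sketch, with result being a successfully grabbed buffer:

```cpp
// Attach the grabbed buffer; the parser makes the chunk data available.
pChunkParser->AttachBuffer( (unsigned char*) result.Buffer(), result.GetPayloadSize() );

// Read the frame counter chunk through the camera object.
int64_t frameCounter = camera.ChunkFramecounter.GetValue();
```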
To check the result of the CRC Checksum chunk feature, use the Chunk Parser's HasCRC() and CheckCRC() methods. Note that the camera only sends a CRC checksum when the CRC Checksum feature is enabled.
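For example:

```cpp
if ( pChunkParser->HasCRC() && !pChunkParser->CheckCRC() )
{
    std::cerr << "Image was damaged during transfer!" << std::endl;
}
```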
Before reusing a buffer for grabbing, the buffer must be detached from the Chunk Parser.
After detaching a buffer, the next grabbed buffer can be attached and the included chunk data can be read.
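A sketch:

```cpp
pChunkParser->DetachBuffer();

// The buffer can now be requeued for grabbing.
streamGrabber.QueueBuffer( result.Handle(), result.Context() );
```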
When you have finished grabbing, the Chunk Parser must be deleted:
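For example:

```cpp
camera.DestroyChunkParser( pChunkParser );
```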
Callback functions can be installed that are triggered when a camera device has been removed. As soon as the Camera object's Open() method has been called, either a C function or a C++ class member function can be installed as a callback.
Installing a C function:
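A hedged sketch; pDevice is the IPylonDevice pointer the camera object was created from:

```cpp
// Signature required for a C-style removal callback.
void RemovalCallbackFunction( Pylon::IPylonDevice* pDevice )
{
    std::cout << "The camera device has been removed." << std::endl;
}

// Register the callback; the returned handle is needed for deregistration.
Pylon::DeviceCallbackHandle hCb =
    Pylon::RegisterRemovalCallback( pDevice, &RemovalCallbackFunction );
```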
Installing a C++ class member function:
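A hedged sketch; the handler class is illustrative:

```cpp
class CRemovalHandler
{
public:
    void HandleRemoval( Pylon::IPylonDevice* pDevice )
    {
        std::cout << "The camera device has been removed." << std::endl;
    }
};

CRemovalHandler handler;
Pylon::DeviceCallbackHandle hCbMember =
    Pylon::RegisterRemovalCallback( pDevice, handler, &CRemovalHandler::HandleRemoval );
```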
All registered callbacks must be deregistered before calling the Camera object's Close() method.
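For example, continuing the sketches above:

```cpp
pDevice->DeregisterRemovalCallback( hCb );
pDevice->DeregisterRemovalCallback( hCbMember );
camera.Close();
```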