Advanced Topics

The advanced topics section can be consulted if more information about special use cases is required.

Architecture of pylon

This section gives a short introduction to the most important concepts of the pylon C++ API.

Figure: The pylon C++ API (pylon3_0_cpp_api.png)

Transport Layers

The term 'transport layer' is used as an abstraction for a physical interface such as IEEE 1394, GigE or Camera Link. For each of these interfaces, there are drivers providing access to camera devices. pylon currently includes four different transport layers:

Transport Layer objects are device factories; they are used to enumerate the camera devices accessible through their transport layer and to create and destroy pylon Devices for these cameras.

Transport Layer Factory

An application program does not access transport layer implementations directly. The Transport Layer Factory is used to create Transport Layer objects, each of which represents a transport layer. Additionally, the Transport Layer Factory can be used as a device factory to create and destroy pylon Devices for all transport layers.

Low Level API pylon Devices

In pylon, physical camera devices are represented by pylon Devices. pylon Devices are only directly used when programming against the Low Level API.

Instant Camera Classes

An Instant Camera provides convenient access to a camera device while being highly customizable. It allows you to grab images with just a few lines of code, providing instant access to the images grabbed from a camera device. Internally, a pylon Device is used; it needs to be created and attached to the Instant Camera object for operation. Additional Device Specific Instant Camera classes provide more convenient access to the parameters of the camera. Furthermore, the Instant Camera Array classes ease programming when grabbing images from multiple camera devices.

GenApi Node Maps

For camera configuration and for accessing other parameters, the pylon API uses the technologies defined by the GenICam standard hosted by the European Machine Vision Association (EMVA). The GenICam specification (http://www.GenICam.org) defines a format for camera description files. These files describe the configuration interface of GenICam compliant cameras. The description files are written in XML (eXtensible Markup Language) and describe camera registers, their interdependencies, and all other information needed to access high-level features such as Gain, Exposure Time, or Image Format by means of low level register read and write operations.

The elements of a camera description file are represented as software objects called Nodes. For example, a node can represent a single camera register, a camera parameter such as Gain, a set of available parameter values, etc. Each node implements the GenApi::INode interface.

The nodes are linked together by different relationships as explained in the GenApi standard document available at www.GenICam.org. The complete set of nodes is stored in a data structure called node map. At runtime, a node map is instantiated from an XML description.

Using the code generators provided by GenICam's GenApi module, a programming interface is created from a camera description file. This interface is represented as a parameter class that has a member for each camera parameter. The members are references (or handles) to the GenApi Nodes representing the parameters. Such a parameter class can represent the parameters of a camera device for a certain transport layer, e.g. GigE.

In pylon, node maps are not only used to represent camera device parameters. Parameters of other pylon objects such as Transport Layer objects or the Image Format Converter are also exposed via GenApi node maps.

Examples:
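
For example, a camera parameter can be read and written generically through the node map. The following sketch assumes an opened Instant Camera object named camera; the node name "Width" is only an illustration, as the available nodes depend on the camera description file.

// Generic node map access (sketch): "camera" is an opened Instant Camera object,
// the node name "Width" is used for illustration only.
GenApi::INodeMap& nodemap = camera.GetNodeMap();
GenApi::CIntegerPtr width( nodemap.GetNode( "Width"));
if ( GenApi::IsWritable( width))
{
width->SetValue( width->GetMax());
}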

Image Handling Support

Besides the Instant Camera classes used for grabbing images, pylon offers additional Image Handling Support for handling grabbed images. This includes an image class, an image format converter, and functions for loading and saving images.

Low Level API

Note
The Low Level API should only be used for existing applications and for rare, highly advanced use cases that cannot be covered using the Instant Camera classes. Please use the Instant Camera classes instead of the Low Level API.

Camera Classes

A Low Level API camera object wraps a pylon Device and provides more convenient access to the parameters of the camera, the stream grabber, the event grabber, and the transport layer using GenApi parameter classes. The low level camera classes have been replaced by the Device Specific Instant Camera classes.

Stream Grabbers

The pylon architecture allows a camera object to deliver one or more streams of image data. To grab images from a stream, a Stream Grabber object is required. Stream Grabber objects cannot be created directly by an application. They are managed by Camera objects, which create and pass out Stream Grabbers. All Stream Grabbers implement the Pylon::IStreamGrabber interface. This means that for all transport layers, images are grabbed from streams in exactly the same way. More details about grabbing images are described below in the Grabbing Images section for the Low Level API.
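
As a minimal Low Level API sketch, the first Stream Grabber can be obtained and opened as shown below; the variable pDevice is assumed to refer to a pylon Device that has already been created and opened elsewhere.

// Low Level API sketch: get the camera device's first stream grabber.
// pDevice is assumed to be a previously created and opened IPylonDevice*.
Pylon::IStreamGrabber* pStreamGrabber = pDevice->GetStreamGrabber( 0);
pStreamGrabber->Open();
// ... register and queue buffers, grab images, retrieve results ...
pStreamGrabber->Close();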

Event Grabbers

Basler GigE Vision, USB3 Vision, and IIDC 1394 cameras can send event messages. Event Grabber objects are used to receive event messages. How to retrieve and process event messages is described below in the Handling Camera Events section for the Low Level API.

Chunk Parsers

If the so-called Chunk Mode is activated, Basler Cameras can send additional information appended to the image data. When in Chunk Mode, the camera sends an extended data stream including the image data combined with added information such as a frame number or a time stamp. The extended data stream is self-descriptive. pylon Chunk Parser objects are used for parsing the extended data stream and for providing access to the added information. Chunk Parser objects are described in the Chunk Parser: Accessing Chunk Features section for the Low Level API.
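
As a rough Low Level API sketch, a Chunk Parser is obtained from the camera object, fed with a grabbed buffer, and destroyed again; pDevice, pBuffer, and bufferSize are assumed to come from the surrounding grab code.

// Low Level API sketch: parse chunk data appended to a grabbed buffer.
// pDevice, pBuffer, and bufferSize are assumed to come from the surrounding grab code.
Pylon::IChunkParser* pChunkParser = pDevice->CreateChunkParser();
pChunkParser->AttachBuffer( pBuffer, bufferSize);
// ... access the appended information, e.g. via the chunk data node map ...
pChunkParser->DetachBuffer();
pDevice->DestroyChunkParser( pChunkParser);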

Settings for Building Applications with pylon

As described in Building Applications with pylon, you can use the pylon-config utility to get all the parameters required to build applications based on pylon.

Required Linker Options

Executables using the pylon API must be linked using the -Wl,-E compiler option. This option is automatically provided by pylon-config --libs. It ensures that run-time type information for pylon-defined types is processed properly. This is essential for proper exception handling and dynamic casting of objects.

Locating dependencies using RPATH

At runtime, the dynamic linker must know where it can find the pylon libraries. In pylon versions prior to 5.0 this was handled by extending the environment variable LD_LIBRARY_PATH. This works, but it is rather cumbersome and error-prone.

From pylon 5 upwards, we use the rpath feature of the GNU linker to specify the runtime location of pylon. You can find more details about rpath and runpath at https://en.wikipedia.org/wiki/Rpath.

There are two cases to distinguish when you want to build an application with rpath support:

Note
These snippets assume that the environment variable PYLON_ROOT points to your pylon installation (e.g. /opt/pylon5).

Debugging pylon Applications Using GigE Cameras

When debugging a pylon application using GigE cameras you may encounter heartbeat timeouts. The application must send special network packets to the camera in defined intervals. If the camera doesn't receive these heartbeats it will consider the connection as broken and won't accept any commands from the application.

When you run your application pylon will normally generate these heartbeats. When you set a breakpoint in your application and the breakpoint is hit, the debugger will suspend all threads including the one sending the heartbeats. So when you debug your application and single step through your code no heartbeats are sent to the camera.

To work around this, extend the heartbeat timeout during development. There are two ways to do so: either set an environment variable named PYLON_GIGE_HEARTBEAT to the desired timeout in milliseconds, which instructs pylon to set the heartbeat timeout when opening the camera, or set the "HeartbeatTimeout" parameter of the camera's transport layer in your code. To set the heartbeat timeout in code, set the value of the HeartbeatTimeout node of the transport layer:

// retrieve the heartbeat node from the transport layer node map
GenApi::CIntegerPtr pHeartbeat = pCamera->GetTLNodeMap()->GetNode("HeartbeatTimeout");
// set heartbeat to 600 seconds. (Note: Only GigE cameras have a "HeartbeatTimeout" node)
if (pHeartbeat != NULL ) pHeartbeat->SetValue(600*1000);
Note
When you set the heartbeat to a high value and stop your application without closing the device properly by calling the Close function you won't be able to open the camera again and will receive an error stating the device is currently in use. This can happen if you stop your application using the debugger. To open the camera again you must either wait until the timeout elapses or disconnect the network cable from the camera.
The pylon GigE transport layer automatically sets the heartbeat timeout to 5 minutes when creating a device if running under a debugger. This can be overridden by setting the PYLON_GIGE_HEARTBEAT environment variable. We recommend not to rely on the default mechanism but to explicitly specify the heartbeat timeout by setting the environment variable or by setting an appropriate heartbeat timeout in the application.

Enumerating and Creating pylon Devices

pylon offers two ways to enumerate and create pylon Devices. The first approach uses the Transport Layer Factory to enumerate cameras across multiple transport layers. The second approach lets a Transport Layer object enumerate and create pylon Devices for a specific transport layer. Before describing the different enumeration schemes, the terms Device Class and Device Info object are introduced.

Device Classes

Each transport layer can create a specific type of pylon Device. For example, the PylonGigE transport layer will create pylon Devices representing GigE Vision cameras. Each type of device is associated with a unique identifier string called Device Class. The device class identifiers are defined in the DeviceClass.h header file.

Device Info Objects

The device enumeration procedure returns a list of Device Info objects. The base class for Device Info objects is Pylon::CDeviceInfo. A Device Info object uniquely describes a camera device. Device Info objects are used by a Transport Layer and the Transport Layer Factory to create camera objects representing the device described by the Device Info objects.

A Pylon::CDeviceInfo object stores a set of string properties. The data type of the values is Pylon::String_t. The following properties are available for all Device Info Objects:

Name - Description
FriendlyName - A human readable name for the device (e.g. the camera's model name). Friendly names are not unique.
FullName - A unique identifier for the device. No two devices will have the same full name.
VendorName - The name of the vendor.
DeviceClass - Each transport layer can create a specific type (or class) of camera devices (e.g. IIDC 1394 or GigE Vision devices). The device types are identified by the Device Class property.
SerialNumber - The device's serial number. The availability of the device serial number is not guaranteed during the enumeration process, so the SerialNumber property may be undefined.
UserDefinedName - For some device classes, it is possible to assign a user defined name to a camera device. The value of this property is not necessarily unique.
DeviceFactory - The unique full name of the Transport Layer object that can create the device.

In addition, specific transport layers will require additional properties. These properties can be accessed in a generic way by using the Pylon::IProperties interface.

A more convenient way is to downcast a Device Info object to the concrete class. This is illustrated in the following example, which prints out the IP address of a GigE Device Info object:

CBaslerGigECamera::DeviceInfo_t& GigEDeviceInfo =
static_cast<CBaslerGigECamera::DeviceInfo_t&>(DeviceInfo);
cout << GigEDeviceInfo.GetIpAddress() << endl;

Using the Transport Layer Factory for Enumerating Cameras

The Pylon::CTlFactory::EnumerateDevices() method is used to retrieve a list of all available devices, regardless of which transport layer is used to access the device. The list contains Device Info objects that must be used for creating Camera objects.

The returned lists are of the Pylon::DeviceInfoList_t type and are used similarly to the Standard Template Library (STL) std::vector class.

The following example prints out the unique names of all connected devices:

#include <pylon/PylonIncludes.h>
#include <iostream>
using namespace Pylon;
using namespace std;
int main()
{
PylonAutoInitTerm autoInitTerm;
// Get the transport layer factory.
CTlFactory& TlFactory = CTlFactory::GetInstance();
DeviceInfoList_t lstDevices;
TlFactory.EnumerateDevices( lstDevices );
if ( ! lstDevices.empty() ) {
DeviceInfoList_t::const_iterator it;
for ( it = lstDevices.begin(); it != lstDevices.end(); ++it )
cout << it->GetFullName() << endl;
}
else
cerr << "No devices found!" << endl;
return 0;
}

Given a Device Info object, the Transport Layer Factory can be used to create Camera objects. The following example illustrates how to create a Camera object for the first element in the device list:

IPylonDevice *pDevice = TlFactory.CreateDevice( lstDevices[0] );
Attention
Never call free or delete on a Pylon::IPylonDevice pointer created by the Transport Layer Factory. Instead, use the Pylon::CTlFactory::DestroyDevice() method to delete an IPylonDevice pointer.

Using the Transport Layer Factory to Create a Transport Layer

A list of all available transport layers can be retrieved by calling the Pylon::CTlFactory::EnumerateTls() method. This method fills a list with Transport Layer Info objects (Pylon::CTlInfo). The data structures are very similar to Device Info objects. Transport Layer Info objects are used as arguments for the Pylon::CTlFactory::CreateTl() method that creates Transport Layer objects. The method returns a pointer of the Pylon::ITransportLayer type.

Attention
Never call free or delete on a ITransportLayer pointer created by the Transport Layer Factory. Instead, use the Pylon::CTlFactory::ReleaseTl() method to free Transport Layer objects.

Using a Transport Layer Object for Enumerating Cameras

Transport Layer objects can be used to enumerate all devices accessible by a specific transport layer. Transport Layer objects are created by the Transport Layer Factory. This is illustrated in the following example, which creates a Transport Layer object for the PylonGigE transport layer:

using namespace Pylon;
int main()
{
PylonAutoInitTerm autoInitTerm;
// Create a Transport Layer object for the PylonGigE transport layer.
ITransportLayer* pTl = CTlFactory::GetInstance().CreateTl( BaslerGigEDeviceClass );
// ... enumerate devices and create pylon Devices using pTl (see below) ...
// Release the Transport Layer object when it is no longer needed.
CTlFactory::GetInstance().ReleaseTl( pTl );
return 0;
}

As described above, Transport Layer objects can also be created by passing in a Transport Layer Info object.

The Transport Layer Object is now used for enumerating all of the devices it can access:

DeviceInfoList_t lstDevices;
pTl->EnumerateDevices( lstDevices );
if ( lstDevices.empty() ) {
cerr << "No devices found" << endl;
exit(1);
}

Pylon::ITransportLayer::EnumerateDevices adds the discovered devices to the passed-in Device Info List.

The Transport Layer object is now used for creating a Camera object. In the following example, a Camera Object for the first enumerated camera device is created:

IPylonDevice* pDevice = pTl->CreateDevice( lstDevices[0] );
Attention
Never call free or delete on a Pylon::IPylonDevice pointer created by a Transport Layer object. Instead, use the Pylon::ITransportLayer::DestroyDevice() method to delete an IPylonDevice pointer.

Applying a Filter when Enumerating Cameras

To enumerate only devices that have certain properties, the EnumerateDevices overload that accepts a filter can be used. The properties are defined by passing a filter list of Device Info objects. A camera is enumerated if it matches the properties of at least one Device Info object in the filter list. The following example enumerates all cameras whose model names appear in the filter list.

#include <pylon/PylonIncludes.h>
#include <iostream>
using namespace Pylon;
using namespace std;
int main()
{
PylonAutoInitTerm autoInitTerm;
CTlFactory& TlFactory = CTlFactory::GetInstance();
// Build a filter list containing the model names of the devices to enumerate.
DeviceInfoList_t filter;
filter.push_back( CDeviceInfo().SetModelName( "SCA750-60FC"));
filter.push_back( CDeviceInfo().SetModelName( "SCA780-54FC"));
DeviceInfoList_t lstDevices;
TlFactory.EnumerateDevices( lstDevices, filter );
if ( ! lstDevices.empty() ) {
DeviceInfoList_t::const_iterator it;
for ( it = lstDevices.begin(); it != lstDevices.end(); ++it )
cout << it->GetFullName() << endl;
}
else
cerr << "No devices found!" << endl;
return 0;
}

Creating Specific Cameras

To create a specific device, an info object must be set up with the properties of the desired device. In the following example, the serial number and the device class are used to identify the camera. Specifying the device class limits the search to the correct transport layer, which saves computation time when using the Transport Layer Factory.

CTlFactory& TlFactory = CTlFactory::GetInstance();
CDeviceInfo di;
di.SetSerialNumber( "20399956");
di.SetDeviceClass( Basler1394DeviceClass);
IPylonDevice* device = TlFactory.CreateDevice( di);

The above example can also be written in one line:

IPylonDevice* device = CTlFactory::GetInstance().CreateDevice( CDeviceInfo().SetDeviceClass( Basler1394DeviceClass).SetSerialNumber( "20399956"));

The CreateDevice method will fail when multiple devices match the provided properties. If any one of the matching devices is acceptable, the CreateFirstDevice method can be used instead.
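
For example, the following line (a sketch that assumes a GigE camera device is connected) creates the first available device of the GigE device class:

// Create the first available device of the GigE device class (sketch).
IPylonDevice* device = CTlFactory::GetInstance().CreateFirstDevice( CDeviceInfo().SetDeviceClass( BaslerGigEDeviceClass));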

The following sample illustrates how to create a device object for a GigE camera with a specific IP address:

//....
CTlFactory& TlFactory = CTlFactory::GetInstance();
CBaslerGigEDeviceInfo di;
di.SetIpAddress( "192.168.0.101");
IPylonDevice* device = TlFactory.CreateDevice( di);

Grab Strategies

The following grab strategies involve the triggering of the camera device. Depending on the configuration of the camera device, different trigger modes are supported.

Additional information regarding this topic can be found in the code sample Grab_Strategies and in the parameter documentation of the Instant Camera.
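
The grab strategy is selected when the grab is started. The following sketch assumes an Instant Camera object named camera:

// Select a grab strategy when starting the grab (sketch, camera is an Instant Camera object).
camera.StartGrabbing( GrabStrategy_OneByOne); // default strategy
// Alternatively:
// camera.StartGrabbing( GrabStrategy_LatestImageOnly);
// camera.StartGrabbing( GrabStrategy_LatestImages);
// camera.StartGrabbing( GrabStrategy_UpcomingImage); // not supported by USB camera devices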

One by One Grab Strategy

Figure: The Buffer Flow Using the One by One Grab Strategy (pylon_buffer_flow_one_by_one.png)

When the One by One grab strategy is used, images are processed in the order of their acquisition.

Latest Image Only Grab Strategy

Figure: The Buffer Flow Using the Latest Image Only Grab Strategy (pylon_buffer_flow_latest.png)

The Latest Image Only grab strategy differs from the One by One grab strategy in the size of the Output Queue, which holds only one buffer. If a new buffer has been grabbed and there is already a buffer waiting in the Output Queue, the waiting buffer is automatically returned to the Empty Buffer Queue (4.1) and the newly filled buffer is placed into the Output Queue instead. This ensures that the application is always provided with the latest grabbed image. Images that are automatically returned to the Empty Buffer Queue are called skipped images.

Latest Images Strategy

Figure: The Buffer Flow Using the Latest Images Grab Strategy (pylon_buffer_flow_latest.png)

The Latest Images strategy extends the above strategies. It allows the user to adjust the size of the Output Queue by setting CInstantCamera::OutputQueueSize. If a new buffer has been grabbed and the output queue is full, the first buffer waiting in the output queue is automatically returned to the Empty Buffer Queue (4.1). The newly filled buffer is then placed into the output queue. This ensures that the application is always provided with the latest grabbed images. Images that are automatically returned to the Empty Buffer Queue are called skipped images. Setting the output queue size to 1 makes this strategy equivalent to the Latest Image Only grab strategy. Setting the output queue size to CInstantCamera::MaxNumBuffer makes it equivalent to the One by One grab strategy.
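
A minimal sketch, assuming an Instant Camera object named camera:

// Use the Latest Images strategy with an output queue holding two buffers (sketch).
camera.OutputQueueSize.SetValue( 2);
camera.StartGrabbing( GrabStrategy_LatestImages);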

Upcoming Image Grab Strategy

Figure: The Buffer Flow Using the Upcoming Image Grab Strategy (pylon_buffer_flow_upcoming.png)

The Upcoming Image grab strategy can be used to make sure that the returned image was grabbed after RetrieveResult() has been called.

Note
The grab strategy Upcoming Image cannot be used together with USB camera devices. See section Differences in Image Transport and the following section for more information.

Getting Informed About Camera Device Removal

To get informed about camera device removal the IsCameraDeviceRemoved() method can be queried or a configuration event handler can be registered. The virtual OnCameraDeviceRemoved() method is called if a camera device is removed. The device removal is only detected while the Instant Camera and therefore the attached pylon Device are open. The attached pylon Device needs to be destroyed after a device removal. This can be done using the DestroyDevice() method.

The following is an example of a configuration event handler that is handling camera device removal:

//Example of a configuration event handler that handles device removal events.
class CSampleConfigurationEventHandler : public Pylon::CConfigurationEventHandler
{
public:
// This method is called from a different thread when the camera device removal has been detected.
void OnCameraDeviceRemoved( CInstantCamera& /*camera*/)
{
cout << "CSampleConfigurationEventHandler::OnCameraDeviceRemoved called." << std::endl;
}
};

The following example shows how a device removal is detected while the camera is accessed in a loop. The IsCameraDeviceRemoved() method can be used to check whether the removal of the camera device has caused an exception while accessing the camera device, e.g. for grabbing.

// Declare a local counter used for waiting.
int loopCount = 0;
// Get the transport layer factory.
CTlFactory& tlFactory = CTlFactory::GetInstance();
// Create an instant camera object with the camera device found first.
CInstantCamera camera( tlFactory.CreateFirstDevice());
// Print the camera information.
cout << "Using device " << camera.GetDeviceInfo().GetModelName() << endl;
cout << "Friendly Name: " << camera.GetDeviceInfo().GetFriendlyName() << endl;
cout << "Full Name : " << camera.GetDeviceInfo().GetFullName() << endl;
cout << "SerialNumber : " << camera.GetDeviceInfo().GetSerialNumber() << endl;
cout << endl;
// For demonstration purposes only, register another configuration event handler that handles device removal.
camera.RegisterConfiguration( new CSampleConfigurationEventHandler, RegistrationMode_Append, Cleanup_Delete);
// For demonstration purposes only, add a sample configuration event handler to print out information
// about camera use.
camera.RegisterConfiguration( new CConfigurationEventPrinter, RegistrationMode_Append, Cleanup_Delete);
// Open the camera. Camera device removal is only detected while the camera is open.
camera.Open();
// Now, try to detect that the camera has been removed:
// Ask the user to disconnect a device
loopCount = c_loopCounterInitialValue;
cout << endl << "Please disconnect the device (timeout " << loopCount / 4 << "s) " << endl;
try
{
// Get a camera parameter using generic parameter access.
GenApi::CIntegerPtr width(camera.GetNodeMap().GetNode("Width"));
// The following loop accesses the camera. It could also be a loop that is
// grabbing images. The device removal is handled in the exception handler.
while ( loopCount > 0)
{
// Print a "." every few seconds to tell the user we're waiting for the callback.
if (--loopCount % 4 == 0)
{
cout << ".";
cout.flush();
}
WaitObject::Sleep(250);
// Change the width value in the camera depending on the loop counter.
// Any access to the camera like setting parameters or grabbing images
// will fail throwing an exception if the camera has been disconnected.
width->SetValue( width->GetMax() - (width->GetInc() * (loopCount % 2)));
}
}
catch (const GenericException &e)
{
if ( camera.IsCameraDeviceRemoved())
{
// The camera device has been removed. This caused the exception.
cout << endl;
cout << "The camera has been removed from the PC." << endl;
cout << "The camera device removal triggered an exception:" << endl
<< e.GetDescription() << endl;
}
else
{
// An unexpected error has occurred.
// In this example it is handled by exiting the program.
throw;
}
}
if ( !camera.IsCameraDeviceRemoved())
cout << endl << "Timeout expired" << endl;
// Destroy the Pylon Device representing the detached camera device.
// It cannot be used anymore.
camera.DestroyDevice();

The above code snippets can be found in the code of the DeviceRemovalHandling sample.

Note
The OnCameraDeviceRemoved call is made from a separate thread.

Accessing Chunk Features

Basler Cameras can send additional information appended to the image data, such as frame counters, time stamps, and CRC checksums. Data chunks are automatically parsed by the Instant Camera class if activated. The following example shows how to do this using a Device Specific Instant Camera class.

// Enable chunks in general.
if (GenApi::IsWritable(camera.ChunkModeActive))
{
camera.ChunkModeActive.SetValue(true);
}
else
{
throw RUNTIME_EXCEPTION( "The camera doesn't support chunk features");
}
// Enable time stamp chunks.
camera.ChunkSelector.SetValue(ChunkSelector_Timestamp);
camera.ChunkEnable.SetValue(true);
#ifndef USE_USB // USB camera devices provide generic counters. An explicit FrameCounter value is not provided by USB camera devices.
// Enable frame counter chunks.
camera.ChunkSelector.SetValue(ChunkSelector_Framecounter);
camera.ChunkEnable.SetValue(true);
#endif
// Enable CRC checksum chunks.
camera.ChunkSelector.SetValue(ChunkSelector_PayloadCRC16);
camera.ChunkEnable.SetValue(true);

The chunk data can be accessed via parameter members of the device specific grab result data class or using the provided chunk data node map (not shown).

// Camera.StopGrabbing() is called automatically by the RetrieveResult() method
// when c_countOfImagesToGrab images have been retrieved.
while( camera.IsGrabbing())
{
// Wait for an image and then retrieve it. A timeout of 5000 ms is used.
// RetrieveResult calls the image event handler's OnImageGrabbed method.
camera.RetrieveResult( 5000, ptrGrabResult, TimeoutHandling_ThrowException);
#ifdef PYLON_WIN_BUILD
// Display the image
Pylon::DisplayImage(1, ptrGrabResult);
#endif
cout << "GrabSucceeded: " << ptrGrabResult->GrabSucceeded() << endl;
// The result data is automatically filled with received chunk data.
// (Note: This is not the case when using the low-level API)
cout << "SizeX: " << ptrGrabResult->GetWidth() << endl;
cout << "SizeY: " << ptrGrabResult->GetHeight() << endl;
const uint8_t *pImageBuffer = (uint8_t *) ptrGrabResult->GetBuffer();
cout << "Gray value of first pixel: " << (uint32_t) pImageBuffer[0] << endl;
// Check to see if a buffer containing chunk data has been received.
if (PayloadType_ChunkData != ptrGrabResult->GetPayloadType())
{
throw RUNTIME_EXCEPTION( "Unexpected payload type received.");
}
// Since we have activated the CRC Checksum feature, we can check
// the integrity of the buffer first.
// Note: Enabling the CRC Checksum feature is not a prerequisite for using
// chunks. Chunks can also be handled when the CRC Checksum feature is deactivated.
if (ptrGrabResult->HasCRC() && ptrGrabResult->CheckCRC() == false)
{
throw RUNTIME_EXCEPTION( "Image was damaged!");
}
// Access the chunk data attached to the result.
// Before accessing the chunk data, you should check to see
// if the chunk is readable. When it is readable, the buffer
// contains the requested chunk data.
if (IsReadable(ptrGrabResult->ChunkTimestamp))
cout << "TimeStamp (Result): " << ptrGrabResult->ChunkTimestamp.GetValue() << endl;
#ifndef USE_USB // USB camera devices provide generic counters. An explicit FrameCounter value is not provided by USB camera devices.
if (IsReadable(ptrGrabResult->ChunkFramecounter))
cout << "FrameCounter (Result): " << ptrGrabResult->ChunkFramecounter.GetValue() << endl;
#endif
cout << endl;
}

The above code snippets can be found in the code of the Grab_ChunkImage sample.

Handling Camera Events

Basler GigE Vision, USB3 Vision, and IIDC 1394 cameras can send event messages. For example, when a sensor exposure has finished, the camera can send an Exposure End event to the PC. The event can be received by the PC before the image data for the finished exposure has been completely transferred. This is useful, for example, to avoid unnecessary delays: an imaged object can already be moved further before the related image data transfer is complete.

The event messages are automatically retrieved and processed by the InstantCamera classes. The information carried by event messages is exposed as nodes in the camera node map and can be accessed like "normal" camera parameters. These nodes are updated when a camera event is received. You can register camera event handler objects that are triggered when event data has been received.

The following camera event handler, which prints the event data to the screen, is used in the camera event example below.

// Example handler for camera events.
class CSampleCameraEventHandler : public CameraEventHandler_t
{
public:
// Only very short processing tasks should be performed by this method. Otherwise, the event notification will block the
// processing of images.
virtual void OnCameraEvent( Camera_t& camera, intptr_t userProvidedId, GenApi::INode* /* pNode */)
{
std::cout << std::endl;
switch ( userProvidedId )
{
case eMyExposureEndEvent: // Exposure End event
cout << "Exposure End event. FrameID: " << camera.ExposureEndEventFrameID.GetValue() << " Timestamp: " << camera.ExposureEndEventTimestamp.GetValue() << std::endl << std::endl;
break;
case eMyEventOverrunEvent: // Event Overrun event
cout << "Event Overrun event. FrameID: " << camera.EventOverrunEventFrameID.GetValue() << " Timestamp: " << camera.EventOverrunEventTimestamp.GetValue() << std::endl << std::endl;
break;
}
}
};

Handling camera events is disabled by default and needs to be activated first:

// Camera event processing must be activated first, the default is off.
camera.GrabCameraEvents = true;

To register a camera event handler, the name of the event data node that is updated on a camera event and a user-provided ID need to be passed. The user-provided ID can be used to distinguish different events handled by the same event handler.

//Enumeration used for distinguishing different events.
enum MyEvents
{
eMyExposureEndEvent = 100,
eMyEventOverrunEvent = 200
};
...
// Register an event handler for the Exposure End event. For each event type, there is a "data" node
// representing the event. The actual data that is carried by the event is held by child nodes of the
// data node. In the case of the Exposure End event, the child nodes are ExposureEndEventFrameID, ExposureEndEventTimestamp,
// and ExposureEndEventStreamChannelIndex. The CSampleCameraEventHandler demonstrates how to access the child nodes within
// a callback that is fired for the parent data node.
camera.RegisterCameraEventHandler( pHandler1, "ExposureEndEventData", eMyExposureEndEvent, RegistrationMode_ReplaceAll, Cleanup_None);
// Register the same handler for a second event. The user-provided ID can be used
// to distinguish between the events.
camera.RegisterCameraEventHandler( pHandler1, "EventOverrunEventData", eMyEventOverrunEvent, RegistrationMode_Append, Cleanup_None);

The event of interest must be enabled in the camera. Events are then handled in the RetrieveResult() call while waiting for images.

// Enable sending of Exposure End events.
// Select the event to receive.
camera.EventSelector.SetValue(EventSelector_ExposureEnd);
// Enable it.
camera.EventNotification.SetValue(EventNotification_GenICamEvent);
// Enable sending of Event Overrun events.
camera.EventSelector.SetValue(EventSelector_EventOverrun);
camera.EventNotification.SetValue(EventNotification_GenICamEvent);
// Start the grabbing of c_countOfImagesToGrab images.
camera.StartGrabbing( c_countOfImagesToGrab);
// This smart pointer will receive the grab result data.
CGrabResultPtr ptrGrabResult;
// Camera.StopGrabbing() is called automatically by the RetrieveResult() method
// when c_countOfImagesToGrab images have been retrieved.
while ( camera.IsGrabbing())
{
// Execute the software trigger. Wait up to 1000 ms for the camera to be ready for trigger.
if ( camera.WaitForFrameTriggerReady( 1000, TimeoutHandling_ThrowException))
{
camera.ExecuteSoftwareTrigger();
}
// Retrieve grab results and notify the camera event and image event handlers.
camera.RetrieveResult( 5000, ptrGrabResult, TimeoutHandling_ThrowException);
// Nothing to do here with the grab result, the grab results are handled by the registered event handler.
}

The above code snippets can be found in the code of the Grab_CameraEvents sample.

Getting Informed About Parameter Changes

The GenICam API provides the functionality for installing callback functions that will be called when a parameter's value or state (e.g. the access mode or value range) has changed. It is possible to install either a C function or a C++ class member function as a callback.

Each callback is installed for a specific parameter. If the parameter itself has been touched or if another parameter that can influence the state of the parameter has been changed, the callback will be fired.

The following example demonstrates how to install callbacks for the Width parameter:

#include <pylon/PylonIncludes.h>
#include <pylon/gige/BaslerGigEInstantCamera.h>
#include <iostream>
using namespace Pylon;
using namespace std;
// Use the device specific Instant Camera class for GigE cameras.
typedef CBaslerGigEInstantCamera Camera_t;
// C callback function
void staticcallback(GenApi::INode* pNode )
{
cout << "Perhaps the value or state of " << pNode->GetName() << " has changed." << endl;
if ( GenApi::IsReadable( pNode ) ) {
GenApi::CValuePtr ptrValue( pNode );
cout << "The current value is " << ptrValue->ToString() << endl;
}
}
class C
{
public:
// Member function as callback function
void membercallback(GenApi::INode* pNode )
{
cout << "Perhaps the value or state of " << pNode->GetName() << " has changed." << endl;
if ( GenApi::IsReadable( pNode ) ) {
GenApi::CValuePtr ptrValue( pNode );
cout << "The current value is " << ptrValue->ToString() << endl;
}
}
};
int main()
{
PylonAutoInitTerm autoInitTerm;
C cb; // cb.membercallback() will be installed as callback
// Only look for cameras supported by Camera_t.
CDeviceInfo info;
info.SetDeviceClass( Camera_t::DeviceClass());
// Create an instant camera object with the first found camera device matching the specified device class.
Camera_t Camera( CTlFactory::GetInstance().CreateFirstDevice( info));
Camera.Open();
// Install the C-function as callback and keep the returned callback handle.
GenApi::CallbackHandleType h1 = GenApi::Register( Camera.Width.GetNode(), &staticcallback );
// Install a member function as callback
GenApi::CallbackHandleType h2 = GenApi::Register( Camera.Width.GetNode(), cb, &C::membercallback );
// This will trigger the callback functions
Camera.Width.SetValue( 128 );
// Uninstall the callback functions
Camera.Width.GetNode()->DeregisterCallback(h2);
Camera.Width.GetNode()->DeregisterCallback(h1);
// Close the camera object
Camera.Close();
return 0;
}
Note
For the nodes of the camera node map, a Camera Event Handler can alternatively be used to get informed about parameter changes. This is because a GenApi node callback is registered internally for the node identified by its name when a Camera Event Handler is registered. This callback triggers a call to the CCameraEventHandler::OnCameraEvent() method. Using a Camera Event Handler can be more convenient. See the Grab_CameraEvents sample for more information about how to register a Camera Event Handler.
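
A minimal sketch of this alternative, assuming an Instant Camera object named camera, a handler pointer pHandler, and a user-defined ID eMyWidthChangedEvent (both names are used for illustration only):

// Get informed about changes of the "Width" node via a Camera Event Handler (sketch).
// pHandler and eMyWidthChangedEvent are illustrative names, not part of the pylon API.
camera.RegisterCameraEventHandler( pHandler, "Width", eMyWidthChangedEvent, RegistrationMode_Append, Cleanup_None);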

Instant Camera Class and User Provided Buffers

A buffer factory can be attached to an Instant Camera object for using user provided buffers. The use of a buffer factory is optional and intended for advanced use cases only. The buffer factory class must be derived from Pylon::IBufferFactory. An instance of a buffer factory object can be attached to an instance of the Instant Camera class by calling SetBufferFactory(). Buffers are allocated when StartGrabbing is called. A buffer factory must not be deleted while it is attached to the camera object, and it must not be deleted until the last buffer is freed. To free all buffers, the grab needs to be stopped and all grab results must be released or destroyed. The Grab_UsingBufferFactory code sample illustrates the use of a buffer factory.
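
The following is a minimal sketch of such a buffer factory. The method signatures follow Pylon::IBufferFactory as used in the Grab_UsingBufferFactory sample; the allocation scheme is only an illustration.

// Minimal sketch of a user provided buffer factory (see the Grab_UsingBufferFactory sample).
class CMyBufferFactory : public Pylon::IBufferFactory
{
public:
// Allocate a buffer of the requested size; bufferContext can carry user data identifying the buffer.
virtual void AllocateBuffer( size_t bufferSize, void** pCreatedBuffer, intptr_t& bufferContext)
{
*pCreatedBuffer = new unsigned char[bufferSize];
bufferContext = 0; // not used in this sketch
}
// Free a buffer that was allocated by AllocateBuffer.
virtual void FreeBuffer( void* pCreatedBuffer, intptr_t /*bufferContext*/)
{
delete[] static_cast<unsigned char*>(pCreatedBuffer);
}
// Called when the Instant Camera object does not need the factory anymore.
virtual void DestroyBufferFactory()
{
// Nothing to do here; the factory is not heap-allocated in this sketch.
}
};
// Attach the factory to the camera before calling StartGrabbing:
// CMyBufferFactory myFactory;
// camera.SetBufferFactory( &myFactory, Cleanup_None);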

GigE Multicast/Broadcast: Grab Images of One Camera on Multiple PCs

Basler GigE cameras can be configured to send the image data stream to multiple destinations. Either IP multicasts or IP broadcasts can be used.

The Controlling Application and the Monitoring Application

When multiple applications on different PCs expect to receive data streams from the same camera, one application is responsible for configuring the camera and for starting and stopping the data acquisition. This application is called the controlling application. Other applications that also expect to receive the data stream are called monitoring applications. These applications must connect to the camera in read-only mode; they can read all camera parameters but cannot change them.

Device enumeration and device creation is identical for the controlling and the monitoring application. Each application type must create a Camera object for the camera device from which it will receive data. The multicast device creation is realized in the same way as for unicast setups (see the earlier explanation).

Example of the configuration of an Instant Camera to act as monitor:

Camera.MonitorModeActive = true;

When using the Low Level API, the parameters passed to the Camera Object's Open() method determine whether an application acts as controlling or as monitoring application. The following code snippet illustrates how a monitoring application must call the Open() method:

// Low Level-API only
// Open the camera in stream mode to receive multicast packets (monitoring mode)
// In this mode the camera must be controlled by another application that must be in controlling mode
Camera.Open(Stream);

When using the low level API the controlling application can either call the Open() method without passing in any arguments (the default parameters for the Open() method make sure that the device will be opened in control and stream mode), or can specify the access mode for the Open() method explicitly:

// Open the camera in controlling mode but without setting the Exclusive flag for the access mode
Camera.Open(Stream | Control);

It is important that the controlling application does not set the Exclusive flag for the access mode. Using the Exclusive flag would prevent monitoring applications from accessing the camera at all. When the controlling application also wants to receive camera events, the Events flag must be added to the access mode parameter.

The controlling application and the monitoring application must create Stream Grabber objects in the same way as is done in unicast setups. Configuring the Stream Grabber for multicasts or broadcasts is explained in the next sections.

Setting Up the Controlling Application for Enabling Multicast and Broadcast

The TransmissionType parameter of the GigE Stream Grabber class can be used to configure whether the camera sends the data stream to a single destination or to multiple destinations.

When the camera sends the image data using limited broadcasts, where the camera sends the data to the address 255.255.255.255, the data is sent to all devices in the local network. 'Limited' means that the data is not sent to destinations behind a router, e.g. to computers on the internet. To enable limited broadcasts, the controlling application must set the TransmissionType parameter to TransmissionType_LimitedBroadcast. The camera sends the data to a specific port. See the Selecting a Destination Port section for setting up the destination port.

When the camera sends the image data using subnet directed broadcasts, the camera sends the data to all devices that are in the same subnet as the camera. To enable subnet directed broadcasts, set the TransmissionType parameter to TransmissionType_SubnetDirectedBroadcast. See the Selecting a Destination Port section for setting up the destination port that receives the data from the camera.

The disadvantage of using broadcasts is that the camera sends the data to all recipients in a network, regardless of whether or not the devices need the data. The network traffic causes a certain CPU load and consumes network bandwidth even for the devices not needing the streaming data.

When the camera sends the image data using multicasts, the data is only sent to those devices that expect the data stream. A device claims its interest in receiving the data by joining a so-called multicast group. A multicast group is defined by an IP address taken from the multicast address range (224.0.0.0 to 239.255.255.255). A member of a specific multicast group only receives data destined for this group. Data for other groups is not received. Usually, network adapters and network switches are able to filter network packets efficiently on the hardware level, preventing a CPU load due to the multicast network traffic for those devices in the network that are not part of the multicast group.

When multicasting is enabled for pylon, pylon automatically takes care of joining and leaving the multicast groups defined by the destination IP address. Keep in mind that some addresses from the multicast address range are reserved for general purposes. The address range from 239.255.0.0 to 239.255.255.255 is assigned by RFC 2365 as a locally administered address space. Use addresses in this range if you are not sure.

To enable multicast streaming, the controlling application must set the TransmissionType parameter to TransmissionType_Multicast and set the DestinationAddr parameter to a valid multicast IP address. In addition to the address, a port must be specified. See the Selecting a Destination Port section for setting up the destination port that receives the data from the camera.

Example using the Device Specific Instant Camera for GigE:

Camera.GetStreamGrabberParams().DestinationAddr = "239.0.0.1";
Camera.GetStreamGrabberParams().DestinationPort = 49154;

Example (Low Level):

StreamGrabber.DestinationAddr = "239.0.0.1";
StreamGrabber.DestinationPort = 49154;

On protocol level, multicasting involves a so-called IGMP message (IGMP = Internet Group Management Protocol). To benefit from multicasting, managed network switches should be used. These managed network switches support the IGMP protocol and only forward multicast packets if there is a device connected that has joined the corresponding multicast group. If the switch does not support the IGMP protocol, multicast is equivalent to broadcasting.

When multiple cameras are to multicast in the same network, each camera should stream to a different multicast group. Streaming to different multicast groups reduces the CPU load and saves network bandwidth if the network switches used support the IGMP protocol.

Setting Up the Monitoring Application for Receiving Multicast and Broadcast Streams

Two cases must be differentiated: either the controlling application has already opened its Stream Grabber and configured the camera before the monitoring application opens its Stream Grabber, or the monitoring application opens its Stream Grabber first.

For the first case, setting up a Stream Grabber for a monitoring application is quite easy. Since the controlling application has already configured the camera (i.e. the destination address and the destination port are set by the controlling application), these settings can be easily read from the camera. To let the monitoring application's Stream Grabber read the settings from the camera, the monitoring application must set the Stream Grabber's TransmissionType parameter to TransmissionType_UseCameraConfig and then call the Stream Grabber's Open() method.

Example using the Device Specific Instant Camera for GigE:

// Select transmission type. If the camera is already controlled by another application
// and configured for multicast or broadcast, the active camera configuration can be used
// (IP Address and Port will be auto set).
Camera.GetStreamGrabberParams().TransmissionType = TransmissionType_UseCameraConfig;
// Start grabbing...

Example (low level):

// Select transmission type. If the camera is already controlled by another application
// and configured for multicast or broadcast, the active camera configuration can be used
// (IP Address and Port will be auto set).
StreamGrabber.TransmissionType = TransmissionType_UseCameraConfig;
// Open the stream grabber
StreamGrabber.Open();

For the second case, where the monitoring application opens the Stream Grabber object before the controlling application opens its Stream Grabber, TransmissionType_UseCameraConfig cannot be used. Instead, the controlling application and all monitoring applications must use the same settings for the IP destination related parameters, i.e. TransmissionType, DestinationAddr, and DestinationPort.

Note that when using broadcasts, the DestinationAddr parameter is read-only. pylon will configure the camera to use the correct broadcast address.

When the controlling application and the monitoring application set the destination related parameters explicitly, it does not matter which application opens the Stream Grabber first.

Selecting a Destination Port

The destination for the camera's data is specified by the destination IP address and the destination IP port. For multicasts, the monitoring and the controlling application must configure the Stream Grabbers for the same multicast IP address. Correspondingly, for broadcasts, the monitoring and the controlling application must use the same broadcast IP address that is automatically set by pylon.

In both cases, the controlling and the monitoring application must specify the same destination port. All applications must use a port that is not already in use on any of the PCs receiving the data stream. The destination port is set using the Stream Grabber's DestinationPort parameter.

When a monitoring application sets the TransmissionType parameter to TransmissionType_UseCameraConfig, it automatically uses the port that the controlling application has written to the corresponding camera register. In that case, the controlling application must use a port that is not in use on the PC where the controlling application is running and not in use on any PC where a monitoring application is running.

When the DestinationPort parameter is set to 0, pylon automatically selects an unused port. This is very convenient for applications using only unicast streaming. In the case of multicast or broadcast, a parameter value of 0 can only be used by the controlling application, and only if the monitoring applications use the TransmissionType_UseCameraConfig value for the TransmissionType parameter. Since there is no guarantee that the port automatically chosen by the controlling application is unused on the PCs where monitoring applications are running, we do not recommend using this automatic port selection for broadcast or multicast.
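
For illustration, using the Device Specific Instant Camera for GigE in a unicast setup:

// Unicast only: let pylon choose an unused destination port automatically (sketch).
Camera.GetStreamGrabberParams().DestinationPort = 0;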

Receiving Image Data

For broadcast or multicast setups grabbing images is realized in the same way as for unicast setups. Controlling and monitoring applications must allocate memory for grabbing, register the buffers at the Stream Grabber, enqueue the buffers and retrieve them back from the Stream Grabber. The only difference between monitoring application and controlling application is that only the controlling application starts and stops the image acquisition in the camera.

Sample Program

The pylon SDK contains a simple sample program called Grab_MultiCast. This sample illustrates how to set up a controlling application and a monitoring application for multicast.

GigE Action Commands

The action command feature lets you trigger actions in multiple GigE devices (e.g. cameras) at roughly the same time or at a defined point in time (scheduled action command) by using a single broadcast protocol message (without extra cabling). Action commands are used in cameras in the same way as for example the digital input lines.

After setting up the camera parameters required for action commands, the methods Pylon::IGigETransportLayer::IssueActionCommand or Pylon::IGigETransportLayer::IssueScheduledActionCommand can be used to trigger action commands. This is shown in the sample Grab_UsingActionCommand. In the sample, the Pylon::CActionTriggerConfiguration is used to set up the required camera parameters. The CActionTriggerConfiguration is provided as a header file, which makes it possible to see which parameters of the camera are changed. The code can be copied and modified for creating your own configuration classes.
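
A rough sketch of issuing an action command is shown below; the key and mask values as well as the broadcast address are placeholders, and the camera-side parameters are assumed to have been set up, e.g. by CActionTriggerConfiguration.

// Sketch: issue an action command via the GigE transport layer.
// DeviceKey, GroupKey, GroupMask, and the broadcast address are placeholder values.
Pylon::IGigETransportLayer* pGigETl = dynamic_cast<Pylon::IGigETransportLayer*>( Pylon::CTlFactory::GetInstance().CreateTl( Pylon::BaslerGigEDeviceClass));
const uint32_t DeviceKey = 0x12345678; // must match the camera's ActionDeviceKey
const uint32_t GroupKey = 0x1;         // must match the camera's ActionGroupKey
const uint32_t GroupMask = 0xffffffff; // address all action groups
pGigETl->IssueActionCommand( DeviceKey, GroupKey, GroupMask, "192.168.1.255");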

Consult the camera User's Manual for more detailed information on action commands.

Saving and Restoring Camera Features to/from Files

This section describes how to save the current values of all readable and writable camera features to a file and how to write the saved feature values back to the device. Saving and restoring camera features is performed using the Pylon::CFeaturePersistence class.

Writing the Camera Features to a File

Use the static Pylon::CFeaturePersistence::Save() method to save the current camera feature values to a file.

// ...
const char Filename[] = "NodeMap.pfs"; // Pylon Feature Stream
// ...
// Open the camera
Camera.Open();
// Save the content of the camera's node map into the file
try
{
CFeaturePersistence::Save( Filename, &Camera.GetNodeMap() );
}
catch (const GenericException &e)
{
// Error handling
cerr << "An exception occurred!" << endl << e.GetDescription() << endl;
}

Writing the Feature Values Back to the Camera

Use the static method Pylon::CFeaturePersistence::Load() to restore the camera values from a file.

// ...
const char Filename[] = "NodeMap.pfs"; // Pylon Feature Stream
// ...
// Open the camera
Camera.Open();
// Read the content of the file back to the camera's node map with validation on
try
{
CFeaturePersistence::Load( Filename, &Camera.GetNodeMap(), true );
}
catch (const GenericException &e)
{
// Error handling
cerr << "An exception occurred!" << endl << e.GetDescription() << endl;
}

The code snippets in this section are taken from the ParametrizeCamera_LoadAndSave sample.

Transferring Shading Data to the Camera

This section describes how to transfer gain shading data to the camera using the GenICam FileIO functionality.

Camera devices supporting the gain shading feature store the shading data as files in the camera's internal file system. These files are accessed using the GenICam Filestream classes provided in the GenApi/Filestream.h header file.

// Include files to use the pylon API
#include <pylon/PylonIncludes.h>
using namespace Pylon;
// For file upload: GenICam file stream classes
#include <GenApi/Filestream.h>
// ...
// Create the camera object of the first available camera
// The camera object is used to set and get all available
// camera features.
Camera_t Camera(pTl->CreateDevice(devices[ 0 ]));
// Open the camera
Camera.Open();
// ...

GenICam defines two char based stream classes for easy to use read and write operations.

typedef ODevFileStreamBase<char, std::char_traits<char> > ODevFileStream;
typedef IDevFileStreamBase<char, std::char_traits<char> > IDevFileStream;

The ODevFileStream class is used for uploading data to the camera's file system. The IDevFileStream class is used for downloading data from the camera's file system.

Internally, the classes use the GenApi::FileProtocolAdapter class. The GenApi::FileProtocolAdapter class defines file based operations like open, close, read, and write.

One common parameter for these operations is the file name of the file to be used on the device file system. The file name must correspond to an existing file in the device file system. To retrieve a list of valid file names supported by the connected camera, read the entries of the "FileSelector" enumeration feature.

GenApi::CEnumerationPtr ptrFileSelector = Camera.GetNodeMap().GetNode("FileSelector");
if ( ptrFileSelector.IsValid() ) {
GenApi::NodeList_t entries;
try
{
ptrFileSelector->GetEntries( entries );
for ( GenApi::NodeList_t::iterator it = entries.begin(); it != entries.end(); ++it) {
if (GenApi::IsAvailable(*it)) {
GenApi::CEnumEntryPtr pEntry = (*it);
if ( NULL != pEntry ) {
GenApi::INode* pNode = pEntry->GetNode();
GenICam::gcstring strFilename = pEntry->GetSymbolic().c_str();
// Do with strFilename whatever you want (e.g. adding to a list)
// ...
} // if
} // if
} // for
}
catch (const GenericException&)
{
// Handle error
// ...
}
} // if

Upload Shading Data to the Camera

The camera device stores gain shading data in files named "UserGainShading1", "UserGainShading2", etc.

To upload gain shading data to the camera use the ODevFileStream class.

// Name of the file in the camera where shading data is stored
static const char CameraFilename[] = "UserGainShading1";
// ...
// Read data from local file into pBuf
char *pBuf = new char[Size];
size_t read = fread(pBuf, 1, Size, fp);
fclose(fp);
if (read != Size) {
throw RUNTIME_EXCEPTION("Failed to read from file '%s'\n", pLocalFilename);
}
// Transfer data to camera
ODevFileStream stream(&Camera.GetNodeMap(), CameraFilename);
stream.write(pBuf, streamsize(Size));
if (stream.fail()) {
// Do some error handling
// ...
}
stream.close();
delete[] pBuf;
// ...

This code snippet is taken from the ParametrizeCamera_Shading sample program.

Download Shading Data From the Camera

Downloading shading data from the camera to a buffer is as simple as uploading shading data.

#define FILEBUFFSIZE 1024 // size of receive buffer!
// Name of the file in the camera where shading data is stored
static const char CameraFilename[] = "UserGainShading1";
char *pBuffer = new char[FILEBUFFSIZE];
// ...
// Transfer data from camera
IDevFileStream stream(&Camera.GetNodeMap(), CameraFilename);
if (stream.fail()) {
throw RUNTIME_EXCEPTION("Failed to open camera file '%s'\n", CameraFilename);
}
int nBytesRead = 0;
if (stream.is_open()) {
do {
stream.read(pBuffer, FILEBUFFSIZE); // read max. FILEBUFFSIZE number of bytes from camera
nBytesRead = stream.gcount(); // get number of bytes read
if (nBytesRead > 0) {
// Do something with the received bytes in pBuffer e.g. writing to disk
// file.write(pBuffer, nBytesRead);
// ...
}
} while (nBytesRead == FILEBUFFSIZE); // if nBytesRead == FILEBUFFSIZE maybe there are more data to receive
}
stream.close();
delete [] pBuffer;

Waiting for Multiple Events

Wait Objects

In applications, a separate thread is often dedicated to grabbing images. Typically, this grab thread must be synchronized with other threads of the application. For example, an application may want to signal the grab thread to terminate.

Synchronization can be realized by using Wait Objects. The concept of Wait Objects introduced in the Retrieving Grabbed Images section allows not only waiting until a grabbed buffer is available, but also getting informed about other events.

Wait Objects are an abstraction of operating system specific objects that can be either signaled or non-signaled. Wait Objects provide a wait operation that blocks until the Wait Object is signaled.

While the pylon interfaces return objects of the Pylon::WaitObject type, pylon provides the Pylon::WaitObjectEx class that is to be instantiated by user applications. Use the static factory method WaitObjectEx::Create() to create these wait objects.

using namespace Pylon;
// Create two application controlled wait objects used for signaling between threads
WaitObjectEx w0 = WaitObjectEx::Create();
WaitObjectEx w1 = WaitObjectEx::Create();
// ...

The WaitObjectEx::Signal() method is used to signal a wait object. The WaitObjectEx::Reset() method can be used to put the Wait Object into the non-signaled state.

// Put w0 into the signaled state
w0.Signal();
// Put w0 into the non-signaled state
w0.Reset();

Container for Wait Objects

The Pylon::WaitObjects class is a container for Wait Objects and provides two methods of waiting for the Wait Objects stored in the container: WaitForAny() returns as soon as any of the Wait Objects is signaled, and WaitForAll() returns when all of them are signaled.

// Create a container and insert two wait objects
WaitObjects waitObjects;
waitObjects.Add(w0);
waitObjects.Add(w1);
// Wait for three seconds until any of the wait objects get signaled
unsigned int index;
if ( waitObjects.WaitForAny( 3000, &index) ) {
cout << "WaitObject w" << index << " has been signaled" << endl;
}
else {
cout << "Timeout occurred when waiting for wait objects" << endl;
}
// Wait for three seconds until all of the wait objects are signaled
if ( waitObjects.WaitForAll(3000) ) {
cout << "All wait objects are signaled" << endl;
} else {
cout << "Timeout occurred when waiting for wait objects" << endl;
}

Example

The following code snippets illustrate how a grab thread uses the WaitForAny() method to simultaneously wait for buffers and a termination request.

After preparing for grabbing, the application's main thread starts the grab thread and sleeps for 5 seconds.

// Start the grab thread. The grab thread starts the image acquisition
// and grabs images
cout << "Going to start the grab thread" << endl;
StartThread();
// Let the thread grab images for 5 seconds
#if defined(PYLON_WIN_BUILD)
Sleep(5000);
#elif defined(PYLON_UNIX_BUILD)
sleep(5);
#else
#error unsupported platform
#endif

The grab thread sets up a Wait Object container holding the StreamGrabber's Wait Object and a Pylon::WaitObjectEx. The latter is used by the main thread to request the termination of grabbing:

// Create and prepare the wait object container
WaitObjects waitObjects;
waitObjects.Add( m_Camera.GetGrabResultWaitObject() ); // Getting informed about grab results
waitObjects.Add( m_TerminationEvent ); // Getting informed about termination request

Then the grab thread enters an infinite loop that starts waiting for any of the Wait Objects:

CGrabResultPtr result; // Grab result
unsigned int index = 0; // Index of the signaled wait object
int nSucc = 0; // Number of successfully grabbed images
bool terminate = false;
while ( ! terminate ) {
if ( ! waitObjects.WaitForAny( INFINITE, &index ) ) {
// Timeout occurred. This should never happen when using INFINITE
cerr << "Timeout occurred" << endl;
break;
}

When the WaitForAny() method returns with true, the value of index is used to determine whether a buffer has been grabbed or a request to terminate grabbing is pending:

switch ( index )
{
case 0: // A grabbed buffer is available
if ( m_Camera.RetrieveResult( 0, result, TimeoutHandling_Return ) ) {
if ( result->GrabSucceeded() ) {
cout << "Successfully grabbed image " << ++nSucc << endl;
unsigned char* pPixel = (unsigned char*) result->GetBuffer();
// Process buffer .....
}
} else {
cerr << "Failed to retrieve result" << endl;
terminate = true;
}
break;
case 1: // Received a termination request
terminate = true;
break;
} // switch

The main thread signals the grab thread to terminate by calling the WaitObjectEx's Signal() method:

// Signal the thread to terminate
cout << "Going to issue termination request" << endl;
m_TerminationEvent.Signal();

Now the main thread can join with the grab thread.
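
How the join is performed depends on the threading API used by the application. The following is only a sketch with hypothetical thread handle members (m_hThread, m_thread), following the platform switch used above:

#if defined(PYLON_WIN_BUILD)
// m_hThread is a hypothetical HANDLE member created by StartThread()
WaitForSingleObject( m_hThread, INFINITE);
CloseHandle( m_hThread);
#elif defined(PYLON_UNIX_BUILD)
// m_thread is a hypothetical pthread_t member created by StartThread()
pthread_join( m_thread, NULL);
#endif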

Interruptible Wait Operation

It was demonstrated in the previous section how a Pylon::WaitObjectEx can be used to signal a thread to terminate.

As an alternative to using dedicated Wait Objects to get informed about external events, the WaitObject::WaitEx() method can be used for waiting. This wait operation can be interrupted. For the Windows version of pylon, WaitEx() can be interrupted by a queued APC or an I/O completion routine. For the Linux and OS X versions of pylon, WaitEx() can be interrupted by signals.

Example:

bool terminate = false; // Will be set to true when a signal has been detected
// Grab images until we get a signal
while ( ! terminate ) {
// Wait for the grabbed image with timeout of 10 seconds. We want to be interruptible by signals.
waitex_result_t waitResult = StreamGrabber.GetWaitObject().WaitEx(10000, true);
switch ( waitResult )
{
case waitex_timeout: // Timeout occurred, no buffer available
{
cerr << "Failed to grab image: timeout" << endl;
continue;
}
case waitex_signaled: // Buffer is available in the driver's output queue
{
// Get the grab result
GrabResult Result;
StreamGrabber.RetrieveResult(Result);
if (Grabbed == Result.Status()) {
// Grabbing was successful, process image
cout << "." << flush;
} else if (Failed == Result.Status())
{
// Error Handling
cerr << "Failed to acquire image!" << endl;
cerr << "Error code : 0x" << hex
<< Result.GetErrorCode() << endl;
cerr << "Error description : "
<< Result.GetErrorDescription() << endl;
}
// Reuse the buffer for grabbing the next image
StreamGrabber.QueueBuffer(Result.Handle(), NULL);
break;
}
case waitex_alerted: // Wait operation has been interrupted by a signal
{
cout << endl << "Got signal. Terminating application" << endl;
terminate = true;
break;
}
} // switch
} // while

This code snippet has been taken from the WaitEx sample that comes with the pylon for Linux SDK.

Corresponding to the WaitObject::WaitEx() method, the Pylon::WaitObjects class provides the interruptible WaitForAnyEx() and WaitForAllEx() methods.

Application Settings for High Performance

The following settings are recommended for applications that require image processing at a constant frame rate and with low jitter:

Note
When using real-time thread priorities, be very careful to ensure that no high-priority thread consumes all of the available CPU time.

Programming Using the pylon Low Level API

The Instant Camera classes use the Low Level API for operation. This means that the previous API, now called the Low Level API, is still part of the pylon C++ API and will remain so in the future. The Low Level API can be used for existing applications and for rare, highly advanced use cases that cannot be covered using the Instant Camera classes. More information about how to program using the Low Level API can be found here.

Migrating Existing Code for Using USB Camera Devices

Changes of Parameter Names and Behavior

Features, such as 'Gain', are named according to the GenICam Standard Feature Naming Convention (SFNC). The SFNC defines a common set of features, their behavior, and the related parameter names. This ensures the interoperability of cameras from different camera vendors. Cameras compliant with the USB3 Vision standard are based on SFNC version 2.0. Basler GigE and Firewire cameras are based on previous SFNC versions. Accordingly, the behavior of these cameras and some parameter names will be different.

SFNC Version Handling

If your code has to work with multiple camera device types that are compatible with different SFNC versions, you can use the GetSfncVersion() method to handle differences in parameter names and behavior. GetSfncVersion() is also supplied as a function for use with legacy code based on the Low Level API.

Example for Generic Parameter Access:

// Check to see which Standard Feature Naming Convention (SFNC) is used by the camera device.
if ( camera.GetSfncVersion() >= Sfnc_2_0_0)
{
// Access the Gain float type node. This node is available for USB camera devices.
// USB camera devices are compliant to SFNC version 2.0.
CFloatPtr gain( nodemap.GetNode( "Gain"));
double newGain = gain->GetMin() + ((gain->GetMax() - gain->GetMin()) / 2);
gain->SetValue(newGain);
cout << "Gain (50%) : " << gain->GetValue() << " (Min: " << gain->GetMin() << "; Max: " << gain->GetMax() << ")" << endl;
}
else
{
// Access the GainRaw integer type node. This node is available for IIDC 1394 and GigE camera devices.
CIntegerPtr gainRaw( nodemap.GetNode( "GainRaw"));
int64_t newGainRaw = gainRaw->GetMin() + ((gainRaw->GetMax() - gainRaw->GetMin()) / 2);
// Make sure the calculated value is valid.
newGainRaw = Adjust(newGainRaw, gainRaw->GetMin(), gainRaw->GetMax(), gainRaw->GetInc());
gainRaw->SetValue(newGainRaw);
cout << "Gain (50%) : " << gainRaw->GetValue() << " (Min: " << gainRaw->GetMin() << "; Max: " << gainRaw->GetMax() << "; Inc: " << gainRaw->GetInc() << ")" << endl;
}
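
The Adjust() function used above is a small helper defined in the pylon sample code, not part of the pylon API. A minimal sketch of such a helper, assuming it only clamps the value to the valid range and snaps it to the increment (the sample's actual implementation may differ in details such as rounding or error handling):

int64_t Adjust( int64_t val, int64_t minimum, int64_t maximum, int64_t inc)
{
// Clamp the value to the valid range.
if (val < minimum) return minimum;
if (val > maximum) return maximum;
// Snap down to the nearest value reachable from minimum in steps of inc.
return minimum + (((val - minimum) / inc) * inc);
}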

Conditional compilation can be used to handle differences in parameter names and behavior when using Native Parameter Access:

#ifdef USE_USB
double newGain = camera.Gain.GetMin() + ((camera.Gain.GetMax() - camera.Gain.GetMin()) / 2);
camera.Gain.SetValue(newGain);
cout << "Gain (50%) : " << camera.Gain.GetValue() << " (Min: " << camera.Gain.GetMin() << "; Max: " << camera.Gain.GetMax() << ")" << endl;
#else
int64_t newGainRaw = camera.GainRaw.GetMin() + ((camera.GainRaw.GetMax() - camera.GainRaw.GetMin()) / 2);
// Make sure the calculated value is valid
newGainRaw = Adjust(newGainRaw, camera.GainRaw.GetMin(), camera.GainRaw.GetMax(), camera.GainRaw.GetInc());
camera.GainRaw.SetValue(newGainRaw);
cout << "Gain (50%) : " << camera.GainRaw.GetValue() << " (Min: " << camera.GainRaw.GetMin() << "; Max: " << camera.GainRaw.GetMax() << "; Inc: " << camera.GainRaw.GetInc() << ")" << endl;
#endif

List of Changes

The following tables show how to map previous parameter names to their equivalents defined in SFNC 2.0. Some previous parameters have no direct equivalents. There are previous parameters, however, that can still be accessed using the so-called alias. The alias is another representation of the original parameter. Usually, the alias provides an Integer representation of a Float parameter.

The following code snippet shows how to get the alias:

Attention
Depending on the camera device model, the alias does not provide a proper name, display name, tool tip, or description. The value range of an alias node can change when updating the camera firmware.
// Get the alias node of a parameter.
// The alias is another representation of the original parameter.
GenApi::CFloatPtr gain( camera.GetNodeMap().GetNode( "Gain"));
GenApi::CIntegerPtr gainRaw;
if ( gain.IsValid())
{
// Get the integer representation of Gain.
// Depending on the camera device model, the alias does not provide a proper name, display name, tool tip, or description.
// The value range of an alias node can change when updating the camera firmware.
gainRaw = gain->GetNode()->GetAlias();
}

The following table shows how to map changes for parameters:

Attention
The actual changes between previous cameras and SFNC 2.0 compliant cameras depend on the camera models and firmware versions used. Some changes may not be listed in the tables below. Other sources of information regarding changes between camera models are the camera User's Manuals and the information shown in the pylon Viewer tool.
Previous Parameter Name | SFNC 2.0 Equivalent | Parameter Type | Comments
AcquisitionFrameCount | AcquisitionBurstFrameCount | Integer | -
AcquisitionFrameRateAbs | AcquisitionFrameRate | Float | -
AcquisitionStartEventFrameID | EventFrameBurstStartFrameID | Integer | -
AcquisitionStartEventTimestamp | EventFrameBurstStartTimestamp | Integer | -
AcquisitionStartOvertriggerEventFrameID | EventFrameBurstStartOvertriggerFrameID | Integer | -
AcquisitionStartOvertriggerEventTimestamp | EventFrameBurstStartOvertriggerTimestamp | Integer | -
AutoExposureTimeAbsLowerLimit | AutoExposureTimeLowerLimit | Float | -
AutoExposureTimeAbsUpperLimit | AutoExposureTimeUpperLimit | Float | -
AutoFunctionAOIUsageIntensity | AutoFunctionAOIUseBrightness | Boolean | -
AutoFunctionAOIUsageWhiteBalance | AutoFunctionAOIUseWhiteBalance | Boolean | -
AutoGainRawLowerLimit | Alias of AutoGainLowerLimit | Integer | -
AutoGainRawUpperLimit | Alias of AutoGainUpperLimit | Integer | -
AutoTargetValue | Alias of AutoTargetBrightness | Integer | -
BalanceRatioAbs | BalanceRatio | Float | -
BalanceRatioRaw | Alias of BalanceRatio | Integer | -
BlackLevelAbs | BlackLevel | Float | -
BlackLevelRaw | Alias of BlackLevel | Integer | -
ChunkExposureTimeRaw | - | Integer | ChunkExposureTimeRaw has been replaced with ChunkExposureTime. ChunkExposureTime is of type float.
ChunkFrameCounter | - | Integer | ChunkFrameCounter has been replaced with ChunkCounterSelector and ChunkCounterValue.
ChunkGainAll | - | Integer | ChunkGainAll has been replaced with ChunkGain. ChunkGain is of type float.
ColorAdjustmentEnable | - | Boolean | ColorAdjustmentEnable has been removed. The color adjustment is always enabled.
ColorAdjustmentHueRaw | Alias of ColorAdjustmentHue | Integer | -
ColorAdjustmentReset | - | Command | ColorAdjustmentReset has been removed.
ColorAdjustmentSaturationRaw | Alias of ColorAdjustmentSaturation | Integer | -
ColorTransformationValueRaw | Alias of ColorTransformationValue | Integer | -
DefaultSetSelector | - | Enumeration | See additional entries in UserSetSelector.
ExposureEndEventFrameID | EventExposureEndFrameID | Integer | -
ExposureEndEventTimestamp | EventExposureEndTimestamp | Integer | -
ExposureTimeAbs | ExposureTime | Float | -
ExposureTimeRaw | Alias of ExposureTime | Integer | -
FrameStartEventFrameID | EventFrameStartFrameID | Integer | -
FrameStartEventTimestamp | EventFrameStartTimestamp | Integer | -
FrameStartOvertriggerEventFrameID | EventFrameStartOvertriggerFrameID | Integer | -
FrameStartOvertriggerEventTimestamp | EventFrameStartOvertriggerTimestamp | Integer | -
GainAbs | Gain | Float | -
GainRaw | Alias of Gain | Integer | -
GammaEnable | - | Boolean | GammaEnable has been removed. Gamma is always enabled.
GammaSelector | - | Enumeration | The sRGB setting is automatically applied when LightSourcePreset is set to any other value than Off.
GlobalResetReleaseModeEnable | - | Boolean | GlobalResetReleaseModeEnable has been replaced with the ShutterMode enumeration.
LightSourceSelector | LightSourcePreset | Enumeration | -
LineDebouncerTimeAbs | LineDebouncerTime | Float | -
MinOutPulseWidthAbs | LineMinimumOutputPulseWidth | Float | -
MinOutPulseWidthRaw | Alias of LineMinimumOutputPulseWidth | Integer | -
ParameterSelector | RemoveParameterLimitSelector | Enumeration | -
ProcessedRawEnable | - | Boolean | ProcessedRawEnable has been removed because it is no longer needed. The camera now uses nondestructive Bayer demosaicing.
ReadoutTimeAbs | SensorReadoutTime | Float | -
ResultingFrameRateAbs | ResultingFrameRate | Float | -
SequenceAddressBitSelector | - | Enumeration | -
SequenceAdvanceMode | - | Enumeration | -
SequenceAsyncAdvance | - | Command | Configure an asynchronous signal as trigger source of path 1.
SequenceAsyncRestart | - | Command | Configure an asynchronous signal as trigger source of path 0.
SequenceBitSource | - | Enumeration | -
SequenceControlConfig | - | Category | -
SequenceControlSelector | - | Enumeration | -
SequenceControlSource | - | Enumeration | -
SequenceCurrentSet | SequencerSetActive | Integer | -
SequenceEnable | - | Boolean | Replaced by SequencerConfigurationMode and SequencerMode.
SequenceSetExecutions | - | Integer | -
SequenceSetIndex | SequencerSetSelector | Integer | -
SequenceSetLoad | SequencerSetLoad | Command | -
SequenceSetStore | SequencerSetSave | Command | -
SequenceSetTotalNumber | - | Integer | Use the range of the SequencerSetSelector.
TestImageSelector | TestPattern | Enumeration | TestPattern instead of TestImageSelector is used for dart and pulse camera models.
TimerDelayAbs | TimerDelay | Float | -
TimerDelayRaw | Alias of TimerDelay | Integer | -
TimerDelayTimebaseAbs | - | Float | The time base is always 1 µs.
TimerDurationAbs | TimerDuration | Float | -
TimerDurationRaw | Alias of TimerDuration | Integer | -
TimerDurationTimebaseAbs | - | Float | The time base is always 1 µs.
TriggerDelayAbs | TriggerDelay | Float | -
UserSetDefaultSelector | UserSetDefault | Enumeration | -

The following table shows how to map changes for enumeration values:

Previous Enumeration Name | Previous Enumeration Value Name | SFNC 2.0 Value Name | Comments
AcquisitionStatusSelector | AcquisitionTriggerWait | FrameBurstTriggerWait | -
AutoFunctionProfile | ExposureMinimum | MinimizeExposureTime | -
AutoFunctionProfile | GainMinimum | MinimizeGain | -
ChunkSelector | GainAll | Gain | The gain value is reported via the ChunkGain node as float.
ChunkSelector | Height | - | Height is part of the image information regardless of the chunk mode setting.
ChunkSelector | OffsetX | - | OffsetX is part of the image information regardless of the chunk mode setting.
ChunkSelector | OffsetY | - | OffsetY is part of the image information regardless of the chunk mode setting.
ChunkSelector | PixelFormat | - | PixelFormat is part of the image information regardless of the chunk mode setting.
ChunkSelector | Stride | - | Stride is part of the image information regardless of the chunk mode setting.
ChunkSelector | Width | - | Width is part of the image information regardless of the chunk mode setting.
EventNotification | GenICamEvent | On | -
EventSelector | AcquisitionStartOvertrigger | FrameBurstStartOvertrigger | -
EventSelector | AcquisitionStart | FrameBurstStart | -
LightSourceSelector | Daylight | Daylight5000K | -
LightSourceSelector | Tungsten | Tungsten2800K | -
LineSelector | Out1 | - | The operation mode of an I/O pin is chosen using the LineMode selector.
LineSelector | Out2 | - | The operation mode of an I/O pin is chosen using the LineMode selector.
LineSelector | Out3 | - | The operation mode of an I/O pin is chosen using the LineMode selector.
LineSelector | Out4 | - | The operation mode of an I/O pin is chosen using the LineMode selector.
LineSource | AcquisitionTriggerWait | FrameBurstTriggerWait | -
LineSource | UserOutput | - | Use UserOutput1, UserOutput2, or UserOutput3 etc. instead.
PixelFormat | BayerBG12Packed | - | The pixel format BayerBG12p is provided by USB camera devices. The memory layout of pixel format BayerBG12Packed and pixel format BayerBG12p is different. See the camera User's Manuals for more information on pixel formats.
PixelFormat | BayerGB12Packed | - | The pixel format BayerGB12p is provided by USB camera devices. The memory layout of pixel format BayerGB12Packed and pixel format BayerGB12p is different. See the camera User's Manuals for more information on pixel formats.
PixelFormat | BayerGR12Packed | - | The pixel format BayerGR12p is provided by USB camera devices. The memory layout of pixel format BayerGR12Packed and pixel format BayerGR12p is different. See the camera User's Manuals for more information on pixel formats.
PixelFormat | BayerRG12Packed | - | The pixel format BayerRG12p is provided by USB camera devices. The memory layout of pixel format BayerRG12Packed and pixel format BayerRG12p is different. See the camera User's Manuals for more information on pixel formats.
PixelFormat | BGR10Packed | BGR10 | -
PixelFormat | BGR12Packed | BGR12 | -
PixelFormat | BGR8Packed | BGR8 | -
PixelFormat | BGRA8Packed | BGRa8 | -
PixelFormat | Mono10Packed | - | The pixel format Mono10p is provided by USB camera devices. The memory layout of pixel format Mono10Packed and pixel format Mono10p is different. See the camera User's Manuals for more information on pixel formats.
PixelFormat | Mono12Packed | - | The pixel format Mono12p is provided by USB camera devices. The memory layout of pixel format Mono12Packed and pixel format Mono12p is different. See the camera User's Manuals for more information on pixel formats.
PixelFormat | Mono1Packed | Mono1p | -
PixelFormat | Mono2Packed | Mono2p | -
PixelFormat | Mono4Packed | Mono4p | -
PixelFormat | RGB10Packed | RGB10 | -
PixelFormat | RGB12Packed | RGB12 | -
PixelFormat | RGB16Packed | RGB16 | -
PixelFormat | RGB8Packed | RGB8 | -
PixelFormat | RGBA8Packed | RGBa8 | -
PixelFormat | YUV411Packed | YCbCr411_8 | -
PixelFormat | YUV422_YUYV_Packed | YCbCr422_8 | -
PixelFormat | YUV444Packed | YCbCr8 | -
TestImageSelector | Testimage1 | GreyDiagonalSawtooth8 | GreyDiagonalSawtooth8 instead of Testimage1 is used for dart and pulse camera models.
TriggerSelector | AcquisitionStart | FrameBurstStart | -

Migration Mode

For convenience, pylon offers a migration mode for USB camera devices. If the migration mode is activated, the changes shown in the tables above are mapped automatically, provided a mapping exists. The migration mode makes it easier to write code that works with multiple camera device types compatible with different SFNC versions. However, if you are only working with SFNC 2.0 compatible cameras, it is strongly recommended to adapt existing code to be SFNC 2.0 compatible instead of using the migration mode.

Attention
An existing application can use features that cannot be automatically mapped. Code that accesses parameters that cannot be automatically mapped needs to be adapted before it is used with SFNC 2.0 compatible cameras. The behavior of a parameter may have changed, too, e.g. regarding the value range. A careful check is required. Furthermore, automatically mapped alias nodes do not provide a proper name, display name, tool tip, or description. The value range of an alias node can change when updating the camera firmware.
// Create an instant camera object with the camera device found first.
Pylon::CInstantCamera camera( CTlFactory::GetInstance().CreateFirstDevice());
// Activate the migration mode if available.
// This allows existing code to work with SFNC 2.0 compatible cameras
// with minimal changes, depending on the used features.
GenApi::CBooleanPtr migrationModeEnable( camera.GetTLNodeMap().GetNode("MigrationModeEnable"));
if ( GenApi::IsWritable( migrationModeEnable))
{
migrationModeEnable->SetValue( true);
}
// Open the camera.
camera.Open();
// For demonstration purposes only, access the previous parameter name ExposureTimeAbs, which is mapped to ExposureTime.
GenApi::CFloatPtr exposureTime( camera.GetNodeMap().GetNode( "ExposureTimeAbs"));
// ExposureTime can still be accessed. The same node is returned.
GenApi::CFloatPtr exposureTime2( camera.GetNodeMap().GetNode( "ExposureTime"));

The migration mode is implemented using proxy objects. If the migration mode is enabled, the call to Pylon::CInstantCamera::GetNodeMap() (or Pylon::IPylonDevice::GetNodeMap()) returns a proxy object that wraps the original node map. The node map proxy maps parameter changes in calls to GenApi::INodeMap::GetNode(). All other calls are forwarded to the original node map. Enumerations having renamed enumeration values are also wrapped by a proxy, e.g. the enumeration PixelFormat. The enumeration proxy maps value name changes in the calls GenApi::IValue::ToString(), GenApi::IValue::FromString(), and GenApi::IEnumeration::GetEntryByName(). All other calls are forwarded to the original enumeration node.
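
For illustration, the following sketch assumes the migration mode has been enabled as shown above and that the camera supports the BGR8 pixel format; it uses a previous enumeration value name, which the enumeration proxy maps to its SFNC 2.0 equivalent listed in the tables:

GenApi::CEnumerationPtr pixelFormat( camera.GetNodeMap().GetNode( "PixelFormat"));
if ( pixelFormat.IsValid() && GenApi::IsWritable( pixelFormat))
{
// GetEntryByName() also accepts the previous value name if a mapping exists.
if ( pixelFormat->GetEntryByName( "BGR8Packed") != NULL)
{
// FromString() maps the previous value name BGR8Packed to BGR8.
pixelFormat->FromString( "BGR8Packed");
}
}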

Differences in Image Transport

The image transport of USB camera devices and Firewire or GigE camera devices is different. Firewire and GigE camera devices automatically send image data to the PC when available. If the PC is not ready to receive the image data because no grab buffer is available, the image data sent by the camera device is dropped. For USB camera devices the PC has to actively request the image data. Grabbed images are stored in the frame buffer of the USB camera device until the PC requests the image data. If the frame buffer of the USB camera device is full, newly acquired frames will be dropped. Old images in the frame buffer of the USB camera device will be grabbed first the next time the PC requests image data. After that, newly acquired images are grabbed.

The Grab Strategy Upcoming Image is Not Available For USB Camera Devices

The Upcoming Image grab strategy relies on the fact that, when using GigE or Firewire cameras, images are automatically dropped if no buffer is available (queued) on the PC. USB camera devices work differently, as described above: old images can still be stored in the frame buffer of the USB camera device. That is why the Upcoming Image strategy cannot be used for USB camera devices. An exception will be thrown if a USB camera device is used together with the Upcoming Image grab strategy.
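
A minimal sketch of what this means in code, assuming an Instant Camera object named camera that is attached to a USB camera device:

try
{
// Starting the grabbing with the Upcoming Image strategy throws an
// exception when the attached device is a USB camera device.
camera.StartGrabbing( GrabStrategy_UpcomingImage);
}
catch (const Pylon::GenericException& e)
{
cerr << "The Upcoming Image grab strategy is not supported: " << e.GetDescription() << endl;
}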

USB Camera Devices and Block ID

Image data is transferred between a PC and a USB camera device using a certain sequence of data packets. In the rare case of an error during the image transport, e.g. if the image packet sequence is out of sync, the image data stream between PC and USB camera device is reset automatically. The image data stream reset causes the Block ID delivered by the USB camera device to start again at zero. pylon indicates this error condition by setting the Block ID of the grab result to its highest possible value (UINT64_MAX) for all subsequent grab results. A Block ID of UINT64_MAX is invalid and cannot be used in any further operations. The image data and other grab result data are not affected by the Block ID being invalid. If the application uses the Block ID, grabbing needs to be stopped and restarted to recover from this error condition. The Block ID starts at zero again after grabbing is restarted.
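
A minimal sketch of how an application that relies on the Block ID could detect and recover from this condition, assuming a grab result named ptrGrabResult and an Instant Camera object named camera:

// An invalid Block ID indicates that the image data stream had to be reset.
// UINT64_MAX is defined in <stdint.h>.
if (ptrGrabResult->GetBlockID() == UINT64_MAX)
{
// Stop and restart the grabbing to recover; the Block ID starts at zero again.
camera.StopGrabbing();
camera.StartGrabbing();
}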

Note
Applications that are still using the Low Level API can use the Pylon::IStreamGrabber::CancelGrab method. Calling CancelGrab resets the image stream between PC and USB camera device, too. Therefore, the value of the Block ID is set to UINT64_MAX for all subsequent grab results after calling CancelGrab.

pylon 5.0.5
Copyright © 2006-2016 Basler AG (Thu Aug 11 2016 18:01:27)