This page reviews the GroundWork Monitor Automation Subsystem.
About The Automation Subsystem
Apart from the discovery subsystem (described in the preceding section), the Auto Discovery service also provides an Automation subsystem that can be used to import structured data into GroundWork Monitor's configuration database. Using this subsystem, administrators can have the Auto Discovery service read input data from a simple text file and synchronize the configuration database with that data.
In fact, the automation subsystem is used for just this purpose by the discovery subsystem whenever the results of a discovery process need to be imported into the configuration database. However, the automation subsystem is not limited to being used in this manner, and can be used to synchronize the configuration database with any data source that is able to generate a suitable input file.
GroundWork Monitor uses schema definitions to store the configuration options for a particular automation process, similar to the way in which it uses discovery definitions to store options and variables for a discovery process. In this model, administrators can create separate automation definitions for each of the data-import processes they may need, with each discrete schema definition storing the options that are required for a specific automation process to complete successfully.
For example, administrators can create a schema definition that tells the automation subsystem to read a discovery log for host names and service definitions, and then update the existing configuration database entries with the discovered data. Meanwhile, another schema definition can be created that tells the automation subsystem to recreate the entire configuration database from the input data, which may be useful when the configuration information is being fed from an outside management system. Once these schema definitions have been created, administrators can simply select the most appropriate schema definition for the task at hand and immediately begin the associated automation process, with all of the appropriate options already selected.
As with discovery definitions, a schema definition describes an entire import operation from beginning to end, and incorporates all of the options that are needed for the full process to run through completion. More specifically, each schema definition includes the options that govern the type of synchronization to be performed, the input source file, a description of the input file's layout, and the processing rules that should be applied to the input data. These elements are discussed in detail in the following sections.
The Automation Process
In simple terms, the automation subsystem reads in lines of text from an input file, and then breaks each line into fields of data. As each field is extracted from the input file, one or more matching rules dictate what will happen with that piece of data next (it may be mapped to a field in the GroundWork Monitor configuration database, or it may be discarded, and so forth). Once all of the input data has been parsed and analyzed, the automation subsystem can present the final results to the operator for review, and then update the configuration database accordingly.
In order for this process to work properly, GroundWork Monitor requires the input text to be line-oriented, with each line of input data representing a single row. However, GroundWork Monitor does not require the input data to have fixed-width columns, but instead assumes that the data uses a fixed number of variable-length fields. In this model, each field of data in the input file is expected to be separated from its neighbors by a user-defined sequence of characters, which can either be a single character such as a simple comma (for importing CSV files), or can be multiple characters, and it can even use regular expressions if needed.
Administrators must define the fields that are present in each row of data, along with the processing rules that will be applied to each field as the data is extracted. As the automation subsystem reads each field, the data is mapped to a temporary grid of rows and columns that is conceptually similar to a spreadsheet. In this model, each row of data contains a single record, while each column identifies a specific piece of data, with the intersection of a row and column (a cell) containing each individual piece of data from each record.
Meanwhile, each logical column has one or more processing rules attached to it that tell the automation subsystem what should happen when a specific field is extracted from the input file. These rules may manipulate the data, discard the data, or map the data to a field in the GroundWork Monitor database, among other things. Each column can have multiple processing rules, and each rule will be executed in sequence until all of the rules for that column have been applied.
After all of the input data has been extracted and processed, the final results can be displayed to the operator for confirmation, and inserted into the GroundWork Monitor configuration database.
As an example of how this might work, consider the following sample input:
cisco-router, 192.168.10.1, snmp
windows-server, 192.168.10.21, wmi
linux-server, 192.168.10.34, ping
In the sample above, the data consists of three rows of text, with each row containing fields for hostname, IP address, and host profile, each of which are separated from their neighbors by a single comma character. In order for this data to be useful to the automation subsystem, a schema definition would need to be created which specified the location of the input file, the field separator in use (a single comma), the logical columns in the source data (name, address, and profile), and the processing rules for each of the columns.
Once an appropriate schema definition was created, the automation subsystem would then use those settings to begin an automation process. In that scenario, the automation subsystem would examine each row of data in the specified input file for the separator character, extract the relevant data from each field, and apply the matching rules that had been defined for each column, with this processing repeating for each individual row of data. Once the entire input file had been fully read, the automation subsystem would then present the results to the administrator for review, before finally updating the configuration database with the extracted data.
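The parsing described above can be sketched in a few lines of Python. This is a hypothetical illustration, not GroundWork's actual code; the column names and the `parse` helper are assumptions made for the example.

```python
# Illustrative sketch (hypothetical code, not part of GroundWork): parsing
# the comma-delimited sample above into a grid of rows and named columns.

SAMPLE = """\
cisco-router, 192.168.10.1, snmp
windows-server, 192.168.10.21, wmi
linux-server, 192.168.10.34, ping
"""

COLUMNS = ["name", "address", "profile"]   # the logical columns in the schema

def parse(text, delimiter=","):
    rows = []
    for line in text.splitlines():
        if not line.strip():
            continue                        # ignore blank lines
        fields = [f.strip() for f in line.split(delimiter)]
        rows.append(dict(zip(COLUMNS, fields)))
    return rows

for row in parse(SAMPLE):
    print(row)
```

Each printed row corresponds to one record in the temporary grid, with the cells keyed by the logical column names from the schema definition.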
Although the simple example above can be made to work with some limited applications, the default schema definition provided with GroundWork Monitor is slightly more complicated, and should be used for processes that need to synchronize host and service profile data with the configuration database. Specifically, the default schema definition uses the following column layout, with each field separated from its neighbor by a pair of semi-colon characters:
|A host record must include a hostname and IP address field in order for the record to be associated with a specific device. If an input record does not contain both of those pieces of data then the record will be interpreted as an update to the previous record, and the data that is present in the current record will append or overwrite the data collected from the previous record. This model is intentional, and allows multiple discovery methods to add supplemental profile data or fill in host data with additional details as the data is discovered.|
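The append/overwrite behavior described in the note above can be sketched as follows. This is a hypothetical illustration, not GroundWork's implementation; the record keys (`name`, `address`, `profile`) are assumed names for the example.

```python
# Illustrative sketch: records that lack both a hostname and an IP address
# are treated as updates to the previously accepted record, appending or
# overwriting its fields. (Hypothetical code, not GroundWork's.)

def merge_records(records):
    hosts = []
    for rec in records:
        if rec.get("name") and rec.get("address"):
            hosts.append(dict(rec))          # complete record: a new host
        elif hosts:
            # partial record: fold its non-empty fields into the prior host
            hosts[-1].update({k: v for k, v in rec.items() if v})
    return hosts

merged = merge_records([
    {"name": "web01", "address": "10.0.0.5", "profile": ""},
    {"name": "", "address": "", "profile": "snmp-network"},  # supplemental
])
print(merged)  # one host, with the profile filled in by the second record
```

This is how multiple discovery methods can contribute supplemental data for the same device without each method having to produce a complete record.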
For illustration purposes, /usr/local/groundwork/core/monarch/automation/data/sample-multiline.txt contains sample input data which demonstrates these concepts. That file is typical of input data that is generated by discovery methods which use the default schema definition.
A critical aspect of the automation process is the manner in which the input data and the configuration database are synchronized together. In the usual scenario, an administrator will likely want to simply add new records or update existing records, while leaving any other existing configuration entries unmodified. In some cases, however, it may be desirable or necessary to destructively recreate the configuration database entirely, especially if a separate management tool is the primary management point for network resources and assets.
The synchronization behavior is governed by the schema type option. One schema type must be set for every schema definition. This setting affects other configuration options, so the schema type must be set whenever a new schema definition is first created, and the schema type associated with a schema definition cannot be changed after it has been created.
- host-import - The host-import schema type can add new configuration entries to the configuration system, and can also update existing host configuration entries, but will not modify any other existing configuration entries.
The host-import schema type is the only schema type that can be used for discovery processes, and should generally be used for most automation tasks unless another schema type is known to be more appropriate.
- host-profile-sync - The host-profile-sync schema type will destructively synchronize the configuration database with the contents of the import file, effectively causing the entire configuration database to be recreated every time a schema definition is executed. This schema type is intended to be used when a separate management tool (such as Cacti) is the primary source of configuration information. Given the destructive nature of this schema type, it should be used very prudently.
- other-sync - The other-sync schema type will only modify existing configuration entries, and will not add new entries, nor will it destructively synchronize the configuration database. It is intended to be used for making bulk modifications to configuration entries. For example, it can be used to change the contact groups for specific host entries, or to change the contact groups themselves.
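The difference between the three schema types can be summarized with a small sketch. This is a hypothetical data model (the "database" and the import are plain dicts keyed by host name), not GroundWork's actual storage; it only illustrates which entries each schema type may add, modify, or remove.

```python
import copy

# Illustrative sketch of the three synchronization behaviors.
# Hypothetical data model: db and imported are dicts keyed by host name.

def synchronize(db, imported, schema_type):
    if schema_type == "host-import":
        for name, attrs in imported.items():
            db.setdefault(name, {}).update(attrs)   # add new, update existing
    elif schema_type == "host-profile-sync":
        db.clear()                                  # destructive rebuild
        db.update(copy.deepcopy(imported))
    elif schema_type == "other-sync":
        for name, attrs in imported.items():
            if name in db:                          # modify existing only
                db[name].update(attrs)
    else:
        raise ValueError("unknown schema type: %s" % schema_type)
    return db
```

Note how only host-profile-sync can remove entries that are absent from the import file, and only host-import and host-profile-sync can create new ones.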
When the input file is read by the automation process, each input record is analyzed and broken into its discrete data fields, which are then extracted and assigned to their corresponding columns in an intermediary structure. As the contents of each field are assigned to their column, one or more matching rules are applied to the data, with these rules dictating what is to become of that data.
In the simplest scenario, the data in a column will be mapped to a field in the GroundWork Monitor configuration database, where it will eventually be used to create or update an entry in the configuration database. However, this is a very simplistic scenario, and there are several other possible courses of action. For example, the default schema definitions include matching rules that discard comments and duplicate hosts, link device types to host groups based on operating system identifiers, and more, with all of this work being performed by different matching rules.
Technically, matching rules are conditional processing rules that are similar to traditional if-then rules. The conditional processing is accomplished through the use of string-comparison functions that preface a defined action. For example, a matching rule can be written that says if the device identifier is 'linux' then associate the device with the 'Linux Servers' host group, while another matching rule can be written that says if the first character of data is '#' then discard the entire record.
Once a conditional processing filter has been met, the action part of the matching rule is executed. This is where the real meat of the automation system comes into play. The most common action is a simple statement that instructs GroundWork Monitor to use the field data directly, which is useful for simple tasks such as assigning a host name from the field value. However, more complex actions are also provided, some of which include additional conditional processing filters of their own. For example, some rules will test to see if a configuration object already exists, with different courses of action being taken depending on the outcome of that secondary evaluation.
Each matching rule contains just one string-comparison function, and just one action, although multiple matching rules can be defined for each unique column. In this scenario, each of the different matching rules will be executed in sequence, until all of the rules have been processed.
More specifically, each matching rule consists of at least four configuration options, although some rules require additional configuration data and therefore have more options. However, the two most important are the matching filter and the rule directive.
The match attribute stores the conditional processing filter for the current matching rule (i.e., it stores the if part of the if-then statement). The remainder of the matching rule will only be processed if this comparison returns a true result. If the field data does not match the filter, the data will not be processed by the current matching rule, although another matching rule further ahead may be executed if the data matches that rule's filter. The comparison types that are available for use are as follows:
- use-value-as-is - No comparison is performed, and the matching rule is always executed, even if the field is empty. This is essentially the same as always returning a true result for the conditional processing.
- is-null - The matching rule will only execute if there is no data. This could happen because the original input provided an empty field, or because an earlier matching rule for the column deleted the contents.
- exact - The matching rule will only execute if the field exactly matches the specified string. Matching rules are not case sensitive.
- begins-with - The matching rule will only execute if the field begins with the specified string.
- ends-with - The matching rule will only execute if the field ends with the specified string.
- contains - The matching rule will only execute if the specified string is found somewhere within the field.
- use-perl-reg-exp - Use this matching type if you want to specify your own Perl regular expression for string comparison. This may be needed if you want to force case-sensitive matching, or if the input data is particularly noisy. Note that you can use a single pair of parentheses to specify a return sub-string, as per normal Perl syntax. Only the first parenthesized sub-string will be returned.
- service-definition - This is a special match type that looks for subordinate field structures within the current data, and then creates service definitions based on the subordinate data. This may be needed when devices have a variable number of services that cannot be easily represented by a fixed-field data structure. Refer to the online help for detailed information about this match type and the field syntax requirements.
If a matching rule uses one of the string-comparison techniques described above (such as contains or begins-with) or uses a Perl regular expression, then you must also provide the matching string to be used for the comparison operation. If a matching rule does not use a string-comparison technique (as is the case with use-value-as-is and is-null), this attribute will not be available.
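The comparison types above can be sketched as a single dispatch function. This is a hypothetical Python illustration, not GroundWork's Perl implementation; it assumes (per the exact entry above) that the plain string comparisons are case-insensitive, and (per use-perl-reg-exp) that a regular expression with a capture group returns the first parenthesized sub-string. The filter returns the matched (possibly transformed) value, or `None` for no match.

```python
import re

# Illustrative sketch of the matching filters listed above.
# Hypothetical code: plain comparisons are treated as case-insensitive,
# and the regex filter may return the first parenthesized sub-string.

def match(filter_type, field, pattern=None):
    """Return the (possibly transformed) field on a match, or None."""
    if filter_type == "use-value-as-is":
        return field                         # always true, even when empty
    if filter_type == "is-null":
        return "" if not field else None     # true only for empty fields
    f, p = field.lower(), (pattern or "").lower()
    if filter_type == "exact":
        return field if f == p else None
    if filter_type == "begins-with":
        return field if f.startswith(p) else None
    if filter_type == "ends-with":
        return field if f.endswith(p) else None
    if filter_type == "contains":
        return field if p in f else None
    if filter_type == "use-perl-reg-exp":
        m = re.search(pattern, field)
        if not m:
            return None
        return m.group(1) if m.groups() else m.group(0)
    raise ValueError("unknown filter type: %s" % filter_type)
```

For example, `match("use-perl-reg-exp", "eth0:up", r"(\w+):")` would return just `eth0`, the first captured sub-string.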
The rule attribute stores the processing statement for the current matching rule (i.e., it stores the then part of the if-then statement). The rule will only be executed if the match filter (as described in the preceding section) returns a true result.
|The available rule directives will be determined by the matching filter in use. For example, the is-null matching type will only ever result in a true condition if the current field does not contain data. As such, it cannot be used with rule directives that use field data to populate a configuration entry, and can only be used with rule directives that allow the administrator to specify all needed values.|
The rules types that are available for use are as follows:
- Assign value to - This rule simply assigns the current field value to the specified attribute. However, this rule only works with attributes that can be used to uniquely identify the current host object. If this rule is selected, you must also choose the attribute type to be populated with the field data. The supported attribute types are host name, host alias, host address, primary record, host description, and host profile.
- Convert dword and assign to - This is a special rule that converts a double-word field value to regular text and assigns the resulting value to the specified attribute. This rule only works with attributes that can be used to uniquely identify the current host object. If this rule is selected, you must also choose the attribute type to be populated with the field data. The supported attribute types are host name, host alias, host address, primary record, host description, and host profile.
- Assign host profile - This rule simply assigns a specified host profile to the currently selected host object. If this rule is selected, you must also choose the host profile to be used.
- Assign host profile if undefined - This is a two-part rule that first checks to see if the currently selected host object already has a host profile defined from another matching rule. If not, the current field value will be used for that attribute. If this rule is selected, you must also choose the host profile to be used.
- Assign service - This rule simply assigns a specified service entry to the currently selected host object, with the field data being used as the service entry name. If this rule is selected, you must also choose the service type to be used for the new service entry.
- Resolve to parent - This rule indicates that the current column contains the name of the current device's network parent. Note that this rule directive is only available if the use-value-as-is matching filter has been selected.
- Assign object(s) - This rule simply assigns a specified configuration attribute to the currently selected host object, and populates the attribute with a specified value. This rule is intended to be used in those cases where the input data does not provide the necessary attribute value, and as such is only available if the is-null matching filter has been selected (if you wish to use the field data for the value, use the assign object if exists rule instead). If this rule is selected, you must also choose the attribute type and value to use. The supported attribute types are configuration group, contact group, host group, parent, and service profile.
- Assign object if exists - This is a special two-part rule that first checks to see if the current field contains any data. If so, the specified configuration attribute will be created for the currently selected host object, with the field data being used for the attribute value. Note that this rule is only available when the use-value-as-is matching filter has been selected, since it is the only filter that is capable of providing an empty field without prior detection. If this rule is selected, you must also choose a configuration attribute type to be created. The supported attribute types are configuration group, contact group, host group, host profile, parent, service profile, and service entry.
- Assign value if undefined - This is a two-part rule that first checks to see if the currently selected host object already has the specified configuration attribute defined from another matching rule. If not, the current field value will be used for that attribute. If this rule is selected, you must also choose the attribute type to be populated. The supported attribute types are host name, host alias, host address, primary record, host description, and host profile.
- Add if not exists and assign object - This is a special two-part rule that is intended to be used for creating certain types of global configuration objects on an as-needed basis. This rule first checks to see if a specified configuration object already exists with the same name as the field data. If the object does not exist, then a new configuration object will be created with that name. If this rule is selected, you must also choose a configuration object type for comparison purposes. The supported configuration object types are configuration groups, contact groups, and host groups.
- Discard record - This rule simply causes the automation processor to discard the current record altogether. This rule is used by the default schema definitions to ignore comments in the input file by simply discarding all rows that begin with a # character.
- Discard if match existing record - This is a two-part rule that first checks to see if a host configuration object already exists with the same name as the field data. If so, the current record is simply discarded. This is useful for situations where you only want new devices to be added to the configuration database, and do not want existing host entries to be overwritten.
- Skip column record - This rule is used whenever a column contains multiple subordinate fields (as typically occurs when SNMP interfaces have been enumerated), and causes the automation processor to skip the current subordinate record in the current column.
By combining matching rules, it is possible to come up with some comprehensive processing rules. For example, the default schema definitions include three matching rules that are associated with the first column of data that perform the following steps:
- If the field data begins with a # character, discard the record since it is a comment and not a record.
- Use the field data for the host name attribute of a new record.
- If the field data matches a host entry in the configuration database already, discard the record since it is a duplicate.
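The three-step sequence above can be sketched as a single function that applies the first-column rules in order. This is a hypothetical illustration, not GroundWork's code; `existing_hosts` stands in for the configuration database, and `None` stands in for discarding the record.

```python
# Illustrative sketch (hypothetical code) of the three first-column rules
# described above, applied in sequence.

def process_first_column(field, record, existing_hosts):
    # Rule 1: if the field begins with '#', discard the record (a comment)
    if field.startswith("#"):
        return None
    # Rule 2: use the field data for the host name attribute of the record
    record["host_name"] = field
    # Rule 3: if the host already exists, discard the record (a duplicate)
    if field.lower() in {h.lower() for h in existing_hosts}:
        return None
    return record

existing = ["web01"]
print(process_first_column("# a comment", {}, existing))   # None (comment)
print(process_first_column("web01", {}, existing))         # None (duplicate)
print(process_first_column("db02", {}, existing))          # {'host_name': 'db02'}
```

Because the rules run in sequence, a record can be discarded at rule 1 before any data is assigned, or discarded at rule 3 after the host name has already been mapped.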
Another good example of a column with multiple rules can be found in the default schema definition under the Description column.
Managing Automation Schema Definitions
Automation schema definitions are managed with the Automation console screen. To access the Automation console screen, select the Auto Discovery menu item from the main menu, and then select the Automation menu item in the top menu bar.
The default Automation console screen is shown. As can be seen from that example, GroundWork Monitor provides a default schema definition called GroundWork-Discovery-Pro which is created during installation.
To edit or execute an existing schema definition, select it from the list and click the Next >> button, which will result in the schema definition editor screen being loaded. From there, the schema definition can be modified or executed. These topics are discussed in the Editing Schema Definitions and Running Schema Definitions sections below.
New schema definitions can be created by clicking the New Schema button. This topic is discussed in the Creating Schema Definitions section below.
Creating Schema Definitions
To create a new schema definition, click the New Schema button on the automation console screen. A new screen like the one here will be shown.
Once the fields have been filled in to your satisfaction, click the Add button to continue. At this point the schema definition editor screen will be displayed, with the new schema definition values already loaded. If you do not wish to continue creating a new schema definition, click the Cancel button to return to the main automation console screen.
- Name - Enter a name for your new schema definition.
- Schema type - The drop-down list shows the three schema types that are available. One of the schema types must be selected to continue.
- Create from template - (Optional) Schema definitions can be saved as templates that allow them to be partially reused (see the discussion in the next section for more details).
If you have already saved a schema definition as a template, you can reuse its options and variables by selecting it in the drop-down list. If you do not want to inherit any options from any other definitions, leave the drop-down list empty.
Choosing a schema template will cause all of the defined options to be inherited, including the schema type. As such, the schema type chosen in the drop-down list above will be disregarded.
Editing Schema Definitions
To edit the options and variables associated with a schema definition, select the schema definition in the automation console screen and then click the Next >> button. This will result in the schema definition editor screen being displayed with the schema definition values already loaded (this screen will also be activated when a new schema definition is created, as described in the preceding section), similar to the screen shown below.
|All of the schema types share a common layout, except for a couple of fields. In particular, all of the schema types share common settings and options for attributes such as field separator and rule definitions. However, the host-import and host-profile-sync schema types have an additional field for indicating whether or not the device name and IP address should be determined automatically, and another field for defining the default host profile that should be used for undefined entries. Meanwhile, the other-sync schema type has a separate field for defining the attribute that should be used as the primary synchronization key.|
The figure shows the automation schema editor for the host-import schema type, using the values from the default GroundWork-Discovery-Pro schema definition.
The figure to the right shows the automation schema editor for the other-sync schema type, with no preloaded values. The options and fields in the schema definition editor screen are listed below.
Once you have finished making changes to the schema definition, click the Save button to make the changes permanent.
If you would like to create a template of the schema definition that can be applied to future definitions, click the Save As Template button. Templates are stored as textual XML files in the /usr/local/groundwork/core/monarch/automation/templates directory on the GroundWork Monitor server. Template files can be copied into the same directory on another GroundWork Monitor server and will be immediately usable on that system.
If you want to initiate an automation process using the current values but without making the changes permanent, click the Process Records button. If you want to cancel your changes and return to the main automation console screen, click the Close button.
Schema Definition Options and Fields
- Smart Names (host-import and host-profile-sync only) - GroundWork Monitor's configuration database stores multiple host-specific attributes, including a device short name, a device alias name (which is typically the fully-qualified domain name), and the IP address of the device. If the schema definition uses the host-import or host-profile-sync schema types, and if the input file does not provide all of these attributes, then GroundWork Monitor can attempt to detect the data automatically with a variety of lookup queries. Note that the input data must provide at least one of these three attributes in order for this process to work correctly. Also note that this field does not exist in the schema definition editor screen that is used with other-sync schema types, since the relevant attributes are not modified by that schema type. Due to the overhead required in performing these lookups, this option is disabled by default, although it is enabled in the GroundWork-Discovery-Pro schema definition.
- Primary Sync Object (other-sync only) - In order for the other-sync schema type behavior to operate correctly, it must be able to match input data to existing entries. This drop-down list shows the fields that are usable for this purpose. By default, entries are matched by the host-specific short names, but they can also be matched by broad-based categories such as host group or contact group if those configuration entries need to be modified.
- Data Source - This field specifies the path to the input file that will be read and parsed for the automation process. Note that this field is not used for automation processes that are started by the discovery process. Instead, those jobs use a filename that is generated dynamically from the name of the current discovery definition, with the input file being stored in the /usr/local/groundwork/automation/data directory (this allows multiple discovery jobs to keep their discovery data somewhat independent of each other).
- Delimiter - This field is used to specify the delimiter that is used to separate input fields from each other, when multiple fields are provided in the input data. As stated above, each row of data in the input file may only specify a single record, but each record may contain multiple fields if they have a common delimiter. Users have two choices for entering data here. First, the drop-down list provides a convenient way to choose from several common field delimiter characters. Meanwhile, the edit box allows administrators to specify their own character sequences. The check box between the drop-down list and edit box is used to indicate that the latter should be used, but is not selected by default. In the case of the default GroundWork-Discovery-Pro schema definition, a pair of semi-colon characters is used as the field delimiter.
- Data Columns (and subsequent fields) - This section of the schema definition editor screen is used to define columns of data in the input file, and also used to define the matching rules that are associated with each column. These topics are explored in detail in the Matching Rules section above, while instructions for using these options are provided in the Managing Data Columns and Matching Rules section below.
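The Delimiter field described above accepts either a literal character sequence or a custom regular expression. A short sketch shows the difference; this is hypothetical illustration code (the `split_fields` helper and the sample host name are assumptions), using the default schema's pair of semi-colon characters as the literal case.

```python
import re

# Illustrative sketch: splitting one input row with either a literal
# multi-character delimiter (the default schema uses ';;') or a custom
# regular expression. Hypothetical helper, not GroundWork's code.

def split_fields(line, delimiter=";;", is_regex=False):
    if is_regex:
        return [f.strip() for f in re.split(delimiter, line)]
    return [f.strip() for f in line.split(delimiter)]

row = "web01;;web01.example.com;;192.168.10.5"
print(split_fields(row))                         # literal ';;' delimiter
print(split_fields("a, b,c", r"\s*,\s*", True))  # regex delimiter
```

A regex delimiter is useful when the separator is inconsistent in the input (for example, commas with varying amounts of surrounding whitespace).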
Managing Data Columns and Matching Rules
As was discussed earlier, the automation processor reads in lines of data from the input file, and then extracts the individual fields of data from the input by looking for a sequence of user-defined separator characters. These fields are then mapped to a temporary grid of rows and columns, with one record per row and one field per column. Each column has one or more matching rules that are executed as the field data is extracted, with the matching rules ultimately determining what will become of the data (i.e., will the data be mapped to a field in the GroundWork Monitor configuration database, or will it be discarded, and so forth).
In order for this process to work, administrators must define the columns of data that are found in the input file at hand, and must also define the matching rules that dictate how each column will be processed by the automation subsystem. These tasks are performed in the schema definition editor screen, which contains a Data column section for just this purpose. The figure here shows this portion of the screen for the default GroundWork-Discovery-Pro schema definition.
The figure to the right shows this portion of the screen for a new schema definition that does not yet have any columns or matching rules defined.
|Data columns must be defined before matching rules can be assigned to the column. This is illustrated by the second figure above, which shows no columns, and therefore has no facilities for creating or editing the matching rules.|
When multiple column definitions or matching rules exist, the currently selected definition is highlighted with a dark yellow background. When only one definition exists, it is always selected and therefore always highlighted.
Managing Data Columns
To create a new data column mapping, fill in the fields in the Define Column portion of the screen to suit your requirements.
Once the fields have been filled in to your satisfaction, click the Add Column button to continue. At this point the schema definition editor screen will be redrawn, the new data column will be shown and automatically selected, and facilities for creating matching rules for that column will also be enabled. This is illustrated by the figure below, which shows a new schema definition with a single Test Column that was just created:
If you wish to change the field position of a data column definition, overwrite the numeric value in the Position field and click the Save button at the top of the schema definition editor screen.
If you wish to rename a data column, it must be deleted and recreated. However, this process will also result in the deletion of the matching rules that are associated with the data column.
To delete a data column definition, click the remove hyperlink next to its entry.
Define Column Fields
- Position - Enter a numeric value that reflects the sequence number of the data field in the input file (the first field of data is column number 1 and so forth).
- Column name - Enter a descriptive name for the data column.
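To illustrate how the Position field maps onto the input data, consider a hypothetical input line split on a semicolon separator (both the line and the separator here are assumptions for the example):

```python
# A hypothetical input record; the separator is whatever the schema defines.
line = "bern;bern.alps.com;10.0.0.7"
fields = line.split(";")

# Position values are 1-based: position 1 -> "bern",
# position 2 -> "bern.alps.com", position 3 -> "10.0.0.7"
for position, value in enumerate(fields, start=1):
    print(position, value)
```

A data column with Position set to 2 would therefore receive the field "bern.alps.com" from this record.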
Managing Matching Rules
To create a new matching rule, click on the name of the data column, and then fill in the fields in the Define Match portion of the screen to suit your requirements.
Once the fields have been filled in to your satisfaction, click the Add Match button to continue. At this point the schema definition editor screen will be redrawn, and the new matching rule will be shown and automatically selected. This is illustrated by the figure below, which shows a new schema definition with a single Test Column and a single Test Rule that was just created.
Define Match Fields
- Order - Enter a numeric value for the matching rule priority. Multiple matching rules can be associated with a data column, and the rules are executed in sequence according to their priority order values (matching rule 1 is executed before matching rule 2, and so forth).
- Match task name - Enter a descriptive name for the matching rule.
|Note that the process above only creates a matching rule placeholder; the semantics of the rule have not yet been defined. To set these parameters, the matching rule must be edited.|
To edit a matching rule, click its hyperlinked name. This will cause an additional Match Detail dialog box to be shown that allows you to change all of the attributes of the matching rule, including its priority order, name, and the rule semantics. Any previously defined options for that rule will be loaded when the dialog is drawn.
|The number of fields in this dialog will vary according to the matching filter and rule directives that are used by the currently selected matching rule.|
The figure here shows this dialog with a new (as-yet-undefined) matching rule.
Meanwhile, the figure below shows another example of this dialog box with additional fields that reflect the matching filters and rule directives that have been selected.
Once the fields have been filled in to your satisfaction, click the Update button to continue. At this point the schema definition editor screen will be redrawn, and the modified matching rule will be shown.
To delete a matching rule, click the remove hyperlink next to its entry.
Match Detail Fields
- Order - Enter a numeric value for the matching rule priority.
- Name - Enter a descriptive name for the matching rule.
- Match - This drop-down list contains the available matching filter types. The matching filter types control the conditional branching portion of the matching rule, and also determine the rule directives that are available in the Rule field below. The available match types are discussed in detail in the Match Filters section above.
- Match String - If the Match field above uses a string-comparison filter (such as exact or contains), a new field will appear that allows you to enter the string value that you want to use for the match. This field is not displayed if a string-comparison matching filter is not used.
- Rule - This drop-down list contains the available rule directives, as determined by the matching filter type selected in the field above. The rules determine what will actually happen with the field data when it is read into the column (i.e., whether it will be mapped to a field, discarded, or something else).
- Object - If the rule directive allows a value to be assigned to a configuration object (regardless of the object type or the value being assigned), the Match Detail dialog will provide an Object drop-down list that allows you to choose the object to be changed. This field is not displayed if the rule directive does not modify a configuration object.
- Object-specific value - If you have chosen a rule directive that allows an object to be set to a predefined value, a multiple-item list box will be shown that contains all of the known values for that object. In the example shown above, the administrator has chosen a rule directive that allows the host group attribute for the current device to be set to a predefined host group, so an additional dialog element is displayed that allows the administrator to choose from one of the known host groups.
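The way these fields combine can be thought of as a conditional: the matching filter and Match String form the test, while the rule directive, object, and object-specific value form the action. The following sketch illustrates that combination; the filter names mirror the exact and contains filters mentioned above, but the function names, directive names, and data shapes are assumptions made for the example:

```python
def make_rule(filter_type, match_string, action, obj=None, value=None):
    """Build a callable rule from Match Detail-style fields (illustrative only)."""
    # the Match / Match String portion: a string-comparison test (assumption:
    # only the two string filters named in the text are modeled here)
    tests = {
        "exact": lambda field: field == match_string,
        "contains": lambda field: match_string in field,
    }
    test = tests[filter_type]

    def rule(field, record):
        if test(field):
            if action == "discard":
                return True            # field matched and is dropped
            if action == "assign":     # the Rule / Object / value portion
                record[obj] = value if value is not None else field
                return True
        return False                   # no match: later rules may still fire
    return rule

# e.g. place any device whose field contains "switch" into a predefined
# host group, mirroring the host group example described above
rule = make_rule("contains", "switch", "assign",
                 obj="hostgroup", value="network-devices")
record = {}
rule("core-switch-1", record)
print(record)  # -> {'hostgroup': 'network-devices'}
```

The design point this illustrates is why the dialog redraws itself: until the filter type and rule directive are chosen, the editor cannot know whether a Match String, Object, or object-specific value field is needed at all.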
Running Automation Processes
A schema definition can be executed as part of a discovery process, or as an independent operation. In the former case, the automation process may be initiated from the discovery console screen or the discovery definition editor screen as described in the Running Discovery Processes section above. In the latter case, automation processes can only be executed by clicking the Process Records button in the schema definition editor screen.
When the automation process begins, the contents of the input file will be parsed as described in the Automation Process section above. Once this process completes, you will be presented with a summary screen that shows all of the discovered configuration objects and their major attributes. The figure below shows an example of what this screen can look like, using the sample Multi-Line-Data schema definition and input file:
This screen provides a summary view of the configuration entries that have been detected in the input file, and also provides mechanisms for editing the schema and individual entries, and options to select subsets of the entries for further processing.
The center of the screen shows a scrollable child window that contains the configuration objects that are pending some kind of action (such as a new record waiting to be added, or an existing record waiting to be updated or deleted). Existing entries in the configuration database that are not being modified will not be shown. Similarly, entries that previously had a pending operation but have already been processed or discarded will no longer be pending, and are removed from the summary screen when those changes occur.
Each entry in the summary screen has a checkbox to the right which allows you to select an individual record for manipulation. If you wish to work with multiple entries simultaneously, you can click each entry's individual checkbox, or you can click the Check All button in the bottom right corner of the screen to select all of the entries. You can also click the hypertext color names in the color legend at the top of the screen to select entries of that type (such as selecting all host records that are flagged for deletion, which is sometimes useful for avoiding the destructive effects of a full synchronization).
Once the desired entries have been selected, click the Process Records button at the top of the screen to integrate those records with the GroundWork Monitor configuration database, or click the Discard button to drop those records from the pending list. If you wish to change the schema definition that was used to generate the entries, click the Edit Schema button at the top of the screen, and you will be returned to the schema editor screen (described in the Editing Schema Definitions section above). If you wish to cancel the automation process entirely, click the Close button to return to the main menu.
Each entry in the summary screen has a color that reflects its condition. For example, host records for devices that do not already exist in the configuration database have a green coloring, while host records for devices that do already exist have a light blue coloring (by default, existing devices are not shown in the summary list, since the default schema definitions drop duplicate host records). The color legend at the top of the window shows these colors and their meanings.
Host records in the child window can be resorted according to the primary key, host name, host alias, or host address by choosing the desired sorting method from the Sort by drop-down list.
The major attributes for each entry can be reviewed by moving your mouse over the hypertext elements in each row. For example, moving your mouse over the Alias hyperlink in the bern entry will show that the device's alias has been detected as bern.alps.com, while moving your mouse over the Services hyperlink for the router-1 entry will show all of the interface-specific service entries that have been discovered for that device.
You can change the attributes of multiple records simultaneously by clicking the Enable Overrides button in the bottom left corner of the screen. This will cause a new dialog area to be displayed at the bottom of the screen, similar to the image shown here.
This screen allows you to choose the attribute that you want to override, and the attribute value(s) that you want to force onto the selected records. You can also choose whether to completely replace the discovered values (Replace), or if you want to append the selected value(s) to the discovered values (Merge). Note that only the attributes that have a selected checkbox next to their name will be modified.
Once the desired modifications have been made, select one or more entries from the summary list, and click the Process Records button at the top of the screen to process the selected entries with the requested changes. If you decide that you do not wish to force overrides on any records at this time, click the Disable Overrides button to close the dialog.
The individual attributes for an entry can be edited by clicking the edit hyperlink to the right of the entry. Clicking this link will cause a record editor screen to be displayed with the major attributes of the selected record loaded. This figure shows the editor screen for the zurich device entry.
As can be seen, the record editor screen allows the administrator to override any of the major configuration attributes that have been assigned to the entry. This includes the discovered services and the arguments associated with each service definition (selecting a service entry will allow you to edit the arguments).
Note that any changes you make will only be recognized if you commit the change by clicking the Process Record button. You can also discard the record, or cancel your changes (resulting in the loss of your edits).