
MattIreton
Archer Employee

As of the 6.6 release, the JavaScript Transporter Data Feed functionality introduced in 6.4 became even more flexible and powerful with the addition of an application-managed output writer. This new capability removes the need to work around the inherent heap memory constraints of the Node.js engine, allowing you to ingest larger data sets in a single feed.

 

Multi-Step JavaScript Transporter Refresher

Let’s start off with a quick refresher of JavaScript Transporter.

 

    • Multi-Step JavaScript Transporter gives users the freedom to use JavaScript to feed data into Archer through several native capabilities:
      • Native web request methods to make API requests
      • Native parsers for both XML and JSON
      • Easy syntax

 

    • These capabilities resulted in:
      • Ability to make multiple API calls within a single Data Feed
      • Ability to string multiple dependent API calls together, removing the need to develop and deploy custom-designed API middleware components
      • Ability to implement more complex integrations with Archer Data Feeds

 

    • Memory constraints prior to the 6.6 release
      • The original implementation of JavaScript Transporter allowed only a one-time write to disk before moving on to the next step of the data feed.
      • This meant the entire output structure had to be built and stored in a single variable before being passed on to the Transform step of the data feed.
      • The Node.js engine the transporter leverages has a built-in limit of 268 MB for string variables and 2 GB for binary variables.
      • This made managing large data sets returned by external APIs difficult, sometimes requiring the ingestion to be broken up into several data feed runs.

 

6.6 Solution - Application-Managed Output Writer

 

    • A new custom function provides an output writer that is managed by the data feed application at run time.
    • It allows the user to provide data as items that can be processed individually; the data feed application runtime then shelves them into multiple files that can be processed independently in the next step.
    • There is no limitation on the amount of data an Archer data feed can process, since files can be loaded and processed independently.
    • The output writer internally handles formatting the data for the different data types, writing the data to file(s), and creating a manifest file that is used in the next step of the data feed.
    • Temporary files created by the output writer are automatically cleaned up when the data feed completes.


Output Writer Instantiation

 

Instantiating an output writer is required to use this feature, and the instantiation can occur only once in any user script. The output writer supports three types: JSON, XML, and CSV. The instantiation takes two parameters: type and initParams. The type controls how the transporter formats the data. The initParams are limited to RootObj, RootNode, or Header for the JSON, XML, and CSV types respectively.

 

Here is an example of each type:

 

JSON: json writer.png

 

XML: xml writer.png

 

CSV: csv writer.png
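
In text form, the three instantiations look roughly like the sketch below. The XML call mirrors the form discussed in the comments on this post; the JSON RootObj and CSV Header values are only illustrative, and remember that a real script may instantiate only one output writer.

// Sketch of the three output writer types. Only one may be created per script;
// all three are shown here purely for comparison. The JSON and CSV parameter
// values are illustrative, not taken from the screenshots above.

// JSON: RootObj names the root object that wraps the written items
const jsonWriter = context.OutputWriter.create('JSON', { RootObj: 'Records' });

// XML: RootNode names the root element of the generated document
const xmlWriter = context.OutputWriter.create('XML', { RootNode: 'DETECTION_LIST' });

// CSV: Header supplies the header row of the generated file
const csvWriter = context.OutputWriter.create('CSV', { Header: 'Id,Title,Status' });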

 

Leveraging Output Writer In Your Script

1. Creation of output writer
creation of writer.png

 

 2. Write items to file using the output writer instance. The script can write to file as many times as necessary. Typically the output writer replaces the code that builds the output variable in your current script.

writing files.png
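
For step 2, the write loop in a script might look roughly like the sketch below. The create call matches the form shown elsewhere in this post; the writeItem method name is an assumption used here for illustration, so confirm the exact write call against the screenshot above or the 6.6 data feed documentation.

// Illustrative records, standing in for data returned from an external API
var detections = [
    { HostName: 'web01', Severity: 'High' },
    { HostName: 'db02', Severity: 'Medium' }
];

var outputWriter = context.OutputWriter.create('XML', { RootNode: 'DETECTION_LIST' });

// Write each record as a complete, logically independent item; the data feed
// application shelves the items into separate files behind the scenes.
// NOTE: writeItem is an assumed method name for illustration only. Verify the
// actual write call in the screenshot above or the product documentation.
detections.forEach(function (detection) {
    outputWriter.writeItem({ DETECTION: detection });
});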

 

 3. Since the output writer has already flushed the necessary data to files for the next step, there is no need to pass a data feed output variable in the callback.

 

new callback.png
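
In code, the completion callback in this mode follows the pattern visible in the feed excerpts later in the comments: the output property is simply null, because the data has already been written to files and referenced in the manifest.

// Completion callback when using the output writer: no output payload is
// passed back, since the data has already been flushed to files.
// previousRunContext handling follows the pattern shown in the Tenable feed
// excerpts in the comments below.
callback(null, {
    output: null,
    previousRunContext: JSON.stringify(transportSettings.previousRunContext)
});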

 

 

It is important to note that the original method of passing data through an output variable is still supported. Any existing script will continue to execute exactly as it did prior to 6.6.

 

 old callback.png
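
For contrast, the pre-6.6 pattern hands the entire output back through the callback in a single variable, which is exactly what runs into the string size limit on large data sets. In the sketch below, xmlOutput is just an illustrative stand-in for whatever output variable your script builds.

// Pre-6.6 pattern (still supported): the whole output travels through one
// variable, so it is subject to the Node.js string size limit noted above.
// xmlOutput is an illustrative placeholder for your script's output variable.
callback(null, {
    output: xmlOutput,
    previousRunContext: JSON.stringify(transportSettings.previousRunContext)
});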

                                                               

As you can see, however, when dealing with large data sets, being able to periodically write data to a file is a huge advantage. I am excited to see how this feature is leveraged to quickly integrate more coveted information from even more data sources into your Risk Management solution.

18 Comments
WilsonHack
Contributor III

This is really powerful. Thank you Matt Ireton‌. Quick question: is there a recommended "batch size" if we're using this feature? 100 records at a time? 1k? 10k? I'm sure it depends on how much data, how many fields, which types of fields, etc. but some general guidance would be very helpful.

 

Wilson

MattIreton
Archer Employee

Wilson,

 

Great question! 

The application actually manages the size and content of the files. For approximately every 20 MB of data, it will write a separate, logically independent file. These separate files are collected in a manifest file that is automatically passed on to the next step of the data feed. It is recommended that you write complete, logically independent content with each write so the application can more efficiently manage the generation of these files.

ScottNess3
Collaborator III

I'm currently using the OOB JavaScript Transporter data feeds related to ITSRM for the Qualys integration and Vulnerability Historical Data. Will the associated JavaScript files receive updates related to this enhancement?

MattIreton
Archer Employee

Scott,

 

Yes, updating the ITSRM data feeds to use this enhancement is definitely a high-priority item in the current backlog, but it has not yet been targeted for a specific release.

 

Thanks,

 

Matt

KyleCribbs1
Collaborator III

Where can we peek into the code for this output writer? I believe it is essential for troubleshooting issues we may be having (which we are, haha).

Anonymous
Not applicable

Kyle Cribbs,

 

The source code of Archer is not publicly accessible, for obvious reasons. If the blog posts, Free Friday Tech Huddle replays, and/or product documentation don't have what you want, raise a support case requesting they be updated with what you need to be successful in using the feature.

AmarnathreddyK1
Contributor III

Matt Ireton, I am able to generate an XML file using the output writer, but the XML file starts with the root tag "<undefined>". Is there anything I need to change in the following statement?

outputWriter = context.OutputWriter.create("XML", {RootObj: "<DETECTION_LIST>"});

 output file:

<undefined><DETECTION>

-----------

-----------

------------

</DETECTION>
</undefined>

MattIreton
Archer Employee

Hi Amarnathreddy,

 

I think you are getting the <undefined> nodes because you have specified RootObj instead of RootNode for XML, which means RootNode is getting set to "undefined". Try this for the instantiation of your output writer:

 

outputWriter = context.OutputWriter.create("XML", {RootNode: "DETECTION_LIST"});

AmarnathreddyK1
Contributor III

Thanks Matt Ireton, it worked.

Anonymous
Not applicable

Scott Hagemeyer

 

What recommendations do you have for non-RSA employees developing .js code if the OutputWriter functionality isn't available? Should we write our own class/functions to mimic the functionality and then remove it before uploading to Archer? Just trying to figure out how non-RSA employees would be able to code using that functionality. Any best practices around that? Thanks!

Anonymous
Not applicable

Douglas Campbell,

 

The functionality of OutputWriter is available to you; it's just the inner workings in the source code that are not. If you have the JavaScript Transporter enabled in the ACP, then you can leverage OutputWriter.

Anonymous
Not applicable

Thanks Scott Hagemeyer, I failed to ask a decent question.

 

How do I develop and test in a node.js prompt environment or VS Code (not in Archer)? I receive a message saying "outputWriter is not defined ReferenceError: outputWriter is not defined". I was seeking a strategy to be able to develop applications. Thanks!

 

C:\Users\docampbell\Documents\Solutions\TVM\DataFeeds>node Signed-TenableSC_v1_0_13_DC.js
2020-08-18 08:09:10 :: INFO  :: Datafeed Init
2020-08-18 08:09:10 :: WARN  :: Testing Mode Active
2020-08-18 08:09:10 :: INFO  :: Last Datafeed Run Time: 1970-01-10T00:00:00.000Z
2020-08-18 08:09:10 :: INFO  :: Using StartDate:  1970-01-09T00:00:00.000Z
2020-08-18 08:09:10 :: INFO  :: Using EndDate:    2020-08-19T13:09:10.237Z
2020-08-18 08:09:15 :: ERROR  :: outputWriter is not defined
ReferenceError: outputWriter is not defined
    at SendCompletedRecordsToArcher.resolve (C:\Users\docampbell\Documents\Solutions\TVM\DataFeeds\Signed-TenableSC_v1_0_13_DC.js:203:13)
    at new Promise (<anonymous>)
    at SendCompletedRecordsToArcher (C:\Users\docampbell\Documents\Solutions\TVM\DataFeeds\Signed-TenableSC_v1_0_13_DC.js:196:12)
    at GetAllPages.ProcessPages (C:\Users\docampbell\Documents\Solutions\TVM\DataFeeds\Signed-TenableSC_v1_0_13_DC.js:1041:19)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:189:7)
(node:7432) UnhandledPromiseRejectionWarning: ReferenceError: outputWriter is not defined
    at ReturnToArcher (C:\Users\docampbell\Documents\Solutions\TVM\DataFeeds\Signed-TenableSC_v1_0_13_DC.js:181:5)
    at Init.then.then.catch.e (C:\Users\docampbell\Documents\Solutions\TVM\DataFeeds\Signed-TenableSC_v1_0_13_DC.js:1082:9)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:189:7)
(node:7432) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 3)
(node:7432) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
Anonymous
Not applicable

Douglas Campbell,

 

Ah! I see the angle now, thanks for the clarification. I don't believe OutputWriter would be accessible in this scenario, unless there's a way to take the Archer DLL containing the code and reference it from your code. I'm not nearly enough of a JavaScript expert to answer that question, though. Jeffrey Nylen, is this possible? Or can you think of a way to allow folks to test the JavaScript Transporter outside of Archer while leveraging features like OutputWriter?

Anonymous
Not applicable

Again, I apologize for not being clear the first time. So thanks for bearing with me!

 

I've done all my JST development in VS Code and have tested in a Node.js prompt successfully for almost two years, and OutputWriter is the only issue I've come across so far.

 

I suspect we would just create a new function that determines whether we are in testing mode or within the DFM. If it is test mode, then output to a normal file. Something to this effect:

 

function OutputData(sData){
     if (typeof context === 'undefined'){
          var fs = require('fs');
          fs.writeFile("mycoolfile.xml",sData,function(err)
          {
               if(err){
                    LogError("ERROR SAVING FILE IN TEST MODE: " + err);
               }
          });
     }
     else{
          outputWriter = context.OutputWriter.create('XML', sData);
     }
}

 

What are your thoughts on that approach?

 

The only downside is that we don't have true visibility into the data format/structure Archer receives from the OutputWriter, so handling it in the data feed configuration becomes more of a challenge to develop and troubleshoot.

Anonymous
Not applicable

Douglas Campbell,

 

That approach looks solid to me. I'd also be curious to know if we can somehow take the Archer DLL with OutputWriter in it and reference it from an IDE outside of Archer. I'll reach out again and see if that's possible.

 

For the data format/structure, Matt Ireton, do we have any examples or standards by which OutputWriter produces the files?

UgurUmutAyberk2
Contributor

Hello all,

 

I am dealing with the same problem. I am trying to test TenableSC_v1_0_13.js outside of Archer on Node.js.

A "context is not defined" error occurred because the root module DevCallbacks.js is not available with the integration package. How can I find the module DevCallbacks.js in RSA Archer?

 

Douglas Campbell, Scott Hagemeyer

 

 

 

    C:\entegrasyon\tenable\TenableSC_v1_0_13_Test001.js:26
    const outputWriter = context.OutputWriter.create('XML', { RootNode: 'ROOT' });
    ^

    ReferenceError: context is not defined
    at Object.<anonymous> (C:\entegrasyon\tenable\TenableSC_v1_0_13_Test001.js:26:22)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:538:12)
    at Function.Module._load (internal/modules/cjs/loader.js:530:3)
    at Function.Module.runMain (internal/modules/cjs/loader.js:742:12)
    at startup (internal/bootstrap/node.js:283:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:743:3)


Anonymous
Not applicable

There are probably better ways to handle this, but here's how I approached it for the same data feed for Tenable:

     

Comment this line:

    //const outputWriter = context.OutputWriter.create('XML', { RootNode: 'ROOT' });

     

Add this right after the line above:

    var sOutputFilename = 'tenabletestVULNS.xml';
    const fsExport = require('fs');
    let writeStream = fsExport.createWriteStream(sOutputFilename);
    AppendToFile('<ROOT>');

     

Replace these two functions with this code:

    function ReturnToArcher(err) {
        if (err) {
            LogError('Datafeed Failure due to error.');
            //callback(BuildMessageArray(), { output: null, previousRunContext: JSON.stringify(transportSettings.previousRunContext) });
        } else {
            LogInfo('Sending Complete to Archer.');
            //callback(null, { output: null, previousRunContext: JSON.stringify(transportSettings.previousRunContext) });
            //Upon returning to Archer, need to add closing tag
            AppendToFile('</ROOT>');
            // the finish event is emitted when all data has been flushed from the stream
            writeStream.on('finish', () => {
                console.log('COMPLETE! Wrote all data to file.');
            });
            // close the stream
            writeStream.end();
        }
        return Promise.resolve(true);
    }
    // write completed records to disk
    function SendCompletedRecordsToArcher(cd, data) {
        return new Promise(resolve => {
            // don't write empty data
            //if (transportSettings.debug) {
                //LogInfo(`[${cd.callId}]   Sending ${data.length} to Archer`);
            //}
            if (data && data.length > 0) {
                //ORIGINAL CODE for XML....may need to iterate through to make a csv file though.
                const xmlData = jsonArrayToXMLBuffer(data, cd.callId);
                //Updated to use stream
                AppendToFile(xmlData);
            }
            resolve(true);
        });
    }

     

Create a simple new function:

    function AppendToFile(data)
    {
        writeStream.write(data);
    }

     

I think that's all that was done to get the data written locally so I could run the script in a local Node.js command prompt. Let me know if that doesn't do the trick.

UgurUmutAyberk2
Contributor

This script created the source XML file and does exactly what was needed on the Node.js side. Thank you.