Feature #227

Ability for ThreatScripts to Add to URL Scan List

Added by Luke Murphey over 13 years ago. Updated over 13 years ago.

Status:
Closed
Priority:
Normal
Assignee:
Category:
Scan Engine
Target version:
Start date:
11/05/2010
Due date:
11/05/2010
% Done:

100%

Estimated time:
3.00 h

Description

ThreatScripts ought to be able to add URLs to the scan list. This is a useful feature because it allows resource extraction to be extended through updated definitions, which could extract links from robots.txt files, PDFs, CSS files, etc.

Disabling the scripts would thus disable extraction of the relevant links. Below is an example code snippet:

var url = "http://google.com";
ScanURLs.add(url);

Related issues

Blocks ThreatFactor NSIA - Feature #62: Parse CSS and JavaScript in Detection Engine New 04/08/2010

History

#1 Updated by Luke Murphey over 13 years ago

  • Target version changed from 1.0 (Release) to 1.0.1

#2 Updated by Luke Murphey over 13 years ago

  • Assignee set to Luke Murphey

#3 Updated by Luke Murphey over 13 years ago

  • Due date set to 11/05/2010
  • Start date changed from 10/25/2010 to 11/05/2010
  • Estimated time set to 3.00 h

#4 Updated by Luke Murphey over 13 years ago

  • Category set to Scan Engine

#5 Updated by Luke Murphey over 13 years ago

  • Status changed from New to In Progress
  • % Done changed from 0 to 50

Implemented in r988. Need additional testing to confirm that the loaded URLs are processed by the scan engine.

#6 Updated by Luke Murphey over 13 years ago

Initial testing shows that the link extraction definitions do work. Below is the definition that was tested:

/*
 * Name: ScannerSupport.LinkExtraction.Test
 * ID: 1000000
 * Version: 1
 * Message: This is a test
 * Severity: Medium
 */

importPackage(Packages.ThreatScript);
importPackage(Packages.HTTP);
function analyze( httpResponse, operation, environment ){
    // Return the extracted URL alongside the (non-matching) result
    var a = new Array();
    a[0] = new URL("http://Threatfactor.com/TEST");
    return new Result( false, "Definition did not match the input", a );
}

Links extracted this way will be added to the scan list even if they do not match the domain name restriction.
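As a rough sketch of how such a domain restriction could be evaluated (the function name and matching rule below are assumptions for illustration, not the NSIA implementation), a check might look like:

```javascript
// Hypothetical sketch: decide whether an extracted URL falls inside a
// domain restriction before it is queued for scanning.
function matchesDomainLimit(urlString, allowedDomain) {
    // Pull the host out of an absolute URL (scheme://host[:port]/path)
    var match = /^[a-z][a-z0-9+.-]*:\/\/([^\/:?#]+)/i.exec(urlString);
    if (match === null) {
        return false; // relative or malformed URLs are rejected
    }
    var host = match[1].toLowerCase();
    var domain = allowedDomain.toLowerCase();
    // Accept the restricted domain itself or any of its sub-domains
    return host === domain || host.slice(-(domain.length + 1)) === "." + domain;
}
```

URLs returned from a definition, as described above, would bypass a check of this kind entirely.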

#7 Updated by Luke Murphey over 13 years ago

  • % Done changed from 50 to 70

Implemented methods that allow URLs to be designated as needing to match the domain limit or not in r990.

#8 Updated by Luke Murphey over 13 years ago

The changes were tested with the following definition, which tries to access "http://Threatfactor.com/TEST" only if the domain does not match, but always tries to access "http://Threatfactor.com/TEST_ALL".

/*
 * Name: ScannerSupport.LinkExtraction.Test
 * ID: 1000339
 * Version: 1
 * Message: This is a test
 * Severity: Medium
 */

importPackage(Packages.ThreatScript);
importPackage(Packages.HTTP);

function analyze( httpResponse, operation, environment ){
    // URLs in the result's array are subject to the domain restriction
    var a = new Array();
    a[0] = new URL("http://Threatfactor.com/TEST");
    var result = new Result( false, "Definition did not match the input", a );
    // URLs added with the second argument set to true are scanned regardless
    result.addURL( new URL("http://Threatfactor.com/TEST_ALL"), true );
    return result;
}

#9 Updated by Luke Murphey over 13 years ago

  • Status changed from In Progress to Closed
  • % Done changed from 70 to 100

Changed the definition handling such that URLs are extracted from definitions only if they are not filtered out by the scan policy. This feature has been fully implemented in r994.
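As a rough sketch of that filtering step (the function name and pattern-based policy below are hypothetical, not the NSIA API), applying a scan policy to extracted URLs might look like:

```javascript
// Hypothetical sketch: drop any extracted URL that a scan-policy
// exclusion pattern matches, keeping the rest for scanning.
function filterByScanPolicy(urls, excludedPatterns) {
    return urls.filter(function (url) {
        // Keep the URL only if no exclusion pattern matches it
        return !excludedPatterns.some(function (pattern) {
            return pattern.test(url);
        });
    });
}
```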
