Updated doxygen for ingest modules and removed specific pages in favor of package descriptions

This commit is contained in:
Brian Carrier 2012-07-20 12:19:50 -04:00
parent 8c28dd7767
commit 5cdc24fb78
13 changed files with 348 additions and 424 deletions

View File

@ -1,50 +1,53 @@
// @@@ VERIFY THAT we mention add case wizard in here
/**
* \package org.sleuthkit.autopsy.casemodule
* \section data Accessing Case Data
* A case contains one or more disk images and is the highest-level unit of an investigation.
* All data in a case will be stored in a single database and configuration file.
* A case must be open before analysis can occur. You will use a {@link org.sleuthkit.autopsy.casemodule.Case#Case Case}
* object to get access to the data being analyzed.
* Case settings are stored in an XML file. See the {@link org.sleuthkit.autopsy.casemodule.XMLCaseManagement#XMLCaseManagement() XMLCaseManagement}
* class for more details.
* Currently, only one case can be opened at a time.
* To determine the open case, use the static {@link org.sleuthkit.autopsy.casemodule.Case#getCurrentCase() Case.getCurrentCase()} method.
* Once you have the object for the currently open case, {@link org.sleuthkit.autopsy.casemodule.Case#getRootObjects() Case.getRootObjects()}
* will return the top-level Sleuth Kit Content modules. You can then get their children to go down the tree of data types.
*
* \section events Case Events
* To receive an event when cases are opened, closed, or changed, use the {@link org.sleuthkit.autopsy.casemodule.Case#addPropertyChangeListener(PropertyChangeListener)
* addPropertyChangeListener} method to register your class as a PropertyChangeListener.
* This is most commonly required when developing a new {@link org.sleuthkit.autopsy.corecomponentinterfaces.DataExplorer#DataExplorer() DataExplorer}
* module that needs to get data about the currently opened case.
*
* \section add_image Add Image Process
* The sleuthkit library performs most of the actual work of adding the image to the database; Autopsy provides the user interface and calls methods to set up, control, and finalize the process.
* The add image process is first invoked by org.sleuthkit.autopsy.casemodule.AddImageAction.
* org.sleuthkit.autopsy.casemodule.AddImageWizardIterator instantiates and manages the wizard panels.
* A background worker thread is spawned in the AddImgTask class. The work is delegated to org.sleuthkit.datamodel.AddImageProcess, which calls into native sleuthkit methods via the SleuthkitJNI interface.
* The entire process is enclosed within a database transaction and the transaction is not committed until the user finalizes the process.
* The user can also interrupt the ongoing add image process, which results in a special stop call in sleuthkit. The stop call sets a special stop flag internally in sleuthkit.
* The flag is checked by the sleuthkit code as it is processing the image and,
* if set, it will result in breaking out of any current processing loops and methods, and return from sleuthkit.
* The worker thread in Autopsy will terminate and revert will be called to back out of the current transaction.
* During the add image process, the sleuthkit library reads the image and populates the TSK SQLite database with the image meta-data.
* Rows are inserted into the following tables: tsk_objects, tsk_file_layout, tsk_image_info, tsk_image_names, tsk_vs_info, tsk_vs_parts, tsk_fs_info, tsk_files.
* Refer to http://wiki.sleuthkit.org/index.php?title=SQLite_Database_v2_Schema for more info on the TSK database schema.
* After the image has been processed successfully and the user has confirmed, the transaction is committed to the database.
* Errors from processing the image in sleuthkit are propagated using the org.sleuthkit.datamodel.TskCoreException and org.sleuthkit.datamodel.TskDataException Java exceptions.
* The errors are logged and can be reviewed by the user from the wizard.
* org.sleuthkit.datamodel.TskCoreException is handled by the wizard as a critical, unrecoverable error condition with TSK core, resulting in the interruption of the add image process.
* org.sleuthkit.datamodel.TskDataException, pertaining to an error associated with the data itself (such as an invalid volume offset), is treated as a warning - the process still continues because there is likely image data that can still be read.
*
* \section concurrency Concurrency and locking
* Autopsy is a multi-threaded application; besides threads associated with the GUI, event dispatching and Netbeans RCP framework,
* the application uses threads to support concurrent user-driven processes.
* For instance, the user can add another image to the database while ingest is running on previously added images.
* During the add image process, a database lock is acquired using org.sleuthkit.autopsy.casemodule.SleuthkitCase.dbWriteLock() to ensure exclusive access to the database resource.
* Once the lock is acquired by the add image process, other Autopsy threads that try to acquire the lock (such as ingest modules) will block for the duration of the add image process.
* The database lock is implemented with SQLite database in mind, which does not support concurrent writes. The database lock is released with org.sleuthkit.autopsy.casemodule.SleuthkitCase.dbWriteUnlock() when the add image process has ended.
* The database lock is used for all database access methods in org.sleuthkit.autopsy.casemodule.SleuthkitCase.
\package org.sleuthkit.autopsy.casemodule
\section casemodule_overview Overview
The org.sleuthkit.autopsy.casemodule module is responsible for organizing a case. A case contains one or more disk images and is the highest-level unit of an investigation. All data in a case will be stored in a single database and configuration file. A case must be open before analysis can occur. You will use an org.sleuthkit.autopsy.casemodule.Case object to get access to the data being analyzed.
Case settings are stored in an XML file. See the org.sleuthkit.autopsy.casemodule.XMLCaseManagement class for more details.
Currently, only one case can be opened at a time. To determine the open case, use the static org.sleuthkit.autopsy.casemodule.Case.getCurrentCase() method. Once you have the object for the currently open case, org.sleuthkit.autopsy.casemodule.Case.getRootObjects() will return the top-level Sleuth Kit Content modules. You can then get their children to go down the tree of data types.
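For example, a minimal sketch (not from the codebase; it assumes getRootObjects() returns org.sleuthkit.datamodel.Content objects and exception handling is simplified):
\verbatim
// Walk the top-level content of the currently open case.
Case openCase = Case.getCurrentCase();
for (org.sleuthkit.datamodel.Content root : openCase.getRootObjects()) {
    System.out.println(root.getName());
    // getChildren() descends the tree of data types (volumes, file systems, files)
    for (org.sleuthkit.datamodel.Content child : root.getChildren()) {
        System.out.println("  " + child.getName());
    }
}
\endverbatim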
\section casemodule_events Case Events
To receive an event when cases are opened, closed, or changed, use the org.sleuthkit.autopsy.casemodule.Case.addPropertyChangeListener(PropertyChangeListener) method to register your class as a PropertyChangeListener. This is most commonly required when developing a new module that needs to get data about the currently opened case.
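For example (a sketch; the specific event-name constants are defined in the Case class):
\verbatim
// Register for case open/close/change notifications.
Case.addPropertyChangeListener(new java.beans.PropertyChangeListener() {
    @Override
    public void propertyChange(java.beans.PropertyChangeEvent evt) {
        // Compare evt.getPropertyName() against the event-name constants in Case
        // to determine whether a case was opened, closed, or changed.
    }
});
\endverbatim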
\section casemodule_add_image Add Image Process
The sleuthkit library performs most of the actual work of adding the image to the database; Autopsy provides the user interface and calls methods to set up, control, and finalize the process.
The add image process is first invoked by org.sleuthkit.autopsy.casemodule.AddImageAction.
org.sleuthkit.autopsy.casemodule.AddImageWizardIterator instantiates and manages the wizard panels.
A background worker thread is spawned in the AddImgTask class. The work is delegated to org.sleuthkit.datamodel.AddImageProcess, which calls into native sleuthkit methods via the SleuthkitJNI interface.
The entire process is enclosed within a database transaction and the transaction is not committed until the user finalizes the process.
The user can also interrupt the ongoing add image process, which results in a special stop call in sleuthkit. The stop call sets a special stop flag internally in sleuthkit.
The flag is checked by the sleuthkit code as it is processing the image and,
if set, it will result in breaking out of any current processing loops and methods, and return from sleuthkit.
The worker thread in Autopsy will terminate and revert will be called to back out of the current transaction.
During the add image process, the sleuthkit library reads the image and populates the TSK SQLite database with the image meta-data.
The resulting database will have the TSK schema (http://wiki.sleuthkit.org/index.php?title=SQLite_Database_v2_Schema).
After the image has been processed successfully and the user has confirmed, the transaction is committed to the database.
Errors from processing the image in sleuthkit are propagated using the org.sleuthkit.datamodel.TskCoreException and org.sleuthkit.datamodel.TskDataException Java exceptions.
The errors are logged and can be reviewed by the user from the wizard.
org.sleuthkit.datamodel.TskCoreException is handled by the wizard as a critical, unrecoverable error condition with TSK core, resulting in the interruption of the add image process.
org.sleuthkit.datamodel.TskDataException, pertaining to an error associated with the data itself (such as an invalid volume offset), is treated as a warning - the process still continues because there is likely image data that can still be read.
\section casemodule_concurrency Concurrency and locking
Autopsy is a multi-threaded application; besides threads associated with the GUI, event dispatching and Netbeans RCP framework,
the application uses threads to support concurrent user-driven processes.
For instance, the user can add another image to the database while ingest is running on previously added images.
During the add image process, a database lock is acquired using org.sleuthkit.autopsy.casemodule.SleuthkitCase.dbWriteLock() to ensure exclusive access to the database resource.
Once the lock is acquired by the add image process, other Autopsy threads that try to acquire the lock (such as ingest modules) will block for the duration of the add image process.
The database lock is implemented with the SQLite database in mind, which does not support concurrent writes. The database lock is released with org.sleuthkit.autopsy.casemodule.SleuthkitCase.dbWriteUnlock() when the add image process has ended. The database lock is used for all database access methods in org.sleuthkit.autopsy.casemodule.SleuthkitCase.
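The usual pattern, sketched (assuming the static lock methods described above):
\verbatim
SleuthkitCase.dbWriteLock();       // blocks if another thread (e.g. add image) holds the lock
try {
    // ... perform database writes ...
} finally {
    SleuthkitCase.dbWriteUnlock(); // always release so blocked threads can proceed
}
\endverbatim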
*/

View File

@ -52,7 +52,9 @@ import org.sleuthkit.datamodel.TskData;
* IngestManager sets up and manages ingest services
* runs them in a background thread
* notifies services when work is complete or should be interrupted
* processes messages from services via messenger proxy and posts them to GUI
* processes messages from services via messenger proxy and posts them to GUI.
*
* This runs as a singleton and you can access it using the getDefault() method.
*
*/
public class IngestManager {
@ -97,13 +99,13 @@ public class IngestManager {
private final IngestMonitor ingestMonitor = new IngestMonitor();
private enum IngestManagerEvents {
SERVICE_STARTED, SERVICE_COMPLETED, SERVICE_STOPPED, SERVICE_HAS_DATA
};
public final static String SERVICE_STARTED_EVT = IngestManagerEvents.SERVICE_STARTED.name();
public final static String SERVICE_COMPLETED_EVT = IngestManagerEvents.SERVICE_COMPLETED.name();
public final static String SERVICE_STOPPED_EVT = IngestManagerEvents.SERVICE_STOPPED.name();
public final static String SERVICE_HAS_DATA_EVT = IngestManagerEvents.SERVICE_HAS_DATA.name();
//ui
private IngestUI ui = null;
//singleton
@ -113,6 +115,10 @@ public class IngestManager {
imageIngesters = new ArrayList<IngestImageThread>();
}
/**
* Returns a reference to the singleton instance.
* @return The singleton IngestManager instance.
*/
public static synchronized IngestManager getDefault() {
if (instance == null) {
logger.log(Level.INFO, "creating manager instance");
@ -141,6 +147,11 @@ public class IngestManager {
pcs.firePropertyChange(SERVICE_HAS_DATA_EVT, serviceDataEvent, null);
}
/**
* Returns the return value from a previously run module on the file currently being analyzed.
* @param serviceName Name of the module.
* @return Return value from that module if it was previously run.
*/
IngestServiceAbstractFile.ProcessResult getAbstractFileServiceResult(String serviceName) {
synchronized (abstractFileServiceResults) {
if (abstractFileServiceResults.containsKey(serviceName)) {
@ -335,8 +346,8 @@ public class IngestManager {
}
/**
* test if any of image of AbstractFile ingesters are running
* @return true if any service is running, false otherwise
* Test if any ingest modules are running
* @return true if any module is running, false otherwise
*/
public synchronized boolean isIngestRunning() {
if (isEnqueueRunning()) {
@ -351,6 +362,9 @@ public class IngestManager {
}
/**
* Check if ingest tasks are currently being enqueued.
* @return true if enqueuing is in progress, false otherwise
*/
public synchronized boolean isEnqueueRunning() {
if (queueWorker != null && !queueWorker.isDone()) {
return true;
@ -358,6 +372,9 @@ public class IngestManager {
return false;
}
/**
* Check if the file-level ingest pipeline is running.
* @return true if the file-level ingest pipeline is running, false otherwise
*/
public synchronized boolean isFileIngestRunning() {
if (abstractFileIngester != null && !abstractFileIngester.isDone()) {
return true;
@ -365,6 +382,9 @@ public class IngestManager {
return false;
}
/**
* Check if the image-level ingest pipeline is running.
* @return true if any image-level ingest module is running, false otherwise
*/
public synchronized boolean isImageIngestRunning() {
if (imageIngesters.isEmpty()) {
return false;

View File

@ -1,7 +1,7 @@
/*
* Autopsy Forensic Browser
*
* Copyright 2011 Basis Technology Corp.
* Copyright 2011-2012 Basis Technology Corp.
* Contact: carrier <at> sleuthkit <dot> org
*
* Licensed under the Apache License, Version 2.0 (the "License");
@ -95,25 +95,25 @@ public interface IngestServiceAbstract {
/**
* There are 2 levels of configuration a service can implement: simple and advanced.
* Provides info if the module implements simple configuration.
* Used to determine if a module has implemented a simple (run-time)
* configuration panel that is displayed by the ingest manager.
*
* @return true if this service has a simple configuration
* @return true if this service has a simple (run-time) configuration
*/
public boolean hasSimpleConfiguration();
/**
* There are 2 levels of configuration a service can implement: simple and advanced.
* Provides info if the module implements advanced configuration.
* Used to determine if a module has implemented an advanced (general)
* configuration that can be used for more in-depth module configuration.
*
* @return true if this service has an advanced configuration
*/
public boolean hasAdvancedConfiguration();
/**
* If module implements simple configuration panel
* it should read its current state and make it persistent / save it in this method
* so that the new configuration will be in effect during the ingest.
* Called by the ingest manager if the simple (run-time) configuration
* panel should save its current state so that the settings can be used
* during the ingest.
*/
public void saveSimpleConfiguration();
@ -125,9 +125,10 @@ public interface IngestServiceAbstract {
public void saveAdvancedConfiguration();
/**
* Implements simple module configuration exposed to the user before ingest starts
* Only basic, most frequently used configuration options should be exposed in this panel due to size limitation
* More options, if any, should be available via userConfigureAdvanced()
* Returns a panel that displays the simple (run-time) configuration.
* This is presented to the user before ingest starts and only basic
* settings should be given here. Use the advanced (general) configuration
* panel for more in-depth interfaces.
* The module is responsible for preserving / saving its configuration state
* In addition, saveSimpleConfiguration() can be used
*

View File

@ -21,36 +21,26 @@ package org.sleuthkit.autopsy.ingest;
import org.sleuthkit.datamodel.AbstractFile;
/**
* Ingest service interface that acts on every AbstractFile in the image
* Ingest service interface that will be called for every file in the image
*/
public interface IngestServiceAbstractFile extends IngestServiceAbstract {
/**
* Return value resulting from processing AbstractFile
* Can be used by manager to stop processing the file, or by subsequent service
* Can be used by IngestManager to stop processing the file, or by subsequent module
* in the pipeline as a hint to stop processing the file
*/
public enum ProcessResult {
UNKNOWN, ///< the return value is unknown for the service and current file in the pipeline
OK, ///< indication subsequent services should attempt to process the current file
STOP, ///< file processing should be stopped unconditionally (the pipeline terminates processing of the current file)
COND_STOP, ///< hint to stop processing the file; it should be decided by the interested service whether to stop processing the file
ERROR ///< error encountered processing the file, hint for the depending service to skip processing the file due to error condition (such as file could not be read)
OK, ///< Indicates that processing was successful (including if the file was largely ignored by the module)
COND_STOP, ///< Indicates that the module thinks that the pipeline could stop processing, but it is up to the IngestManager to decide. Use this, for example, if a hash lookup detects that a file is known to be good and can be ignored.
STOP, ///< Indicates that the module thinks that pipeline processing should be stopped unconditionally for the current file (this should be used sparingly for critical system errors and could be removed in a future version)
ERROR, ///< Indicates that an error was encountered while processing the file; a hint for later modules that depend on this module to skip processing the file due to the error condition (such as the file could not be read)
UNKNOWN ///< Indicates that a return value for the module is not known. This should not be returned directly by modules, but is used when modules want to learn about a return value from a previously run module.
};
/**
* Entry point to process file / directory by the service.
*
* Service does all the processing work in this method.
* It may choose to skip the file if the file is not of interest to the service.
* Results of processing, such as extracted data or analysis results should be posted to the blackboard.
*
* In a more advanced module, the module can enqueue the file
* and postpone processing until more files of interest are available.
*
* The service notifies the ingest inbox of interesting events (data, errors, warnings, infos)
* by posting ingest messages
* The service notifies data viewers by firing events using IngestManager.fireServiceDataEvent
* Entry point to process file / directory by the service. See \ref ingestmodule_making for details
* on what modules are responsible for doing.
*
* @param abstractFile file to process
* @return ProcessResult result of the processing that can be used in the pipeline as a hint whether to further process this file

View File

@ -1,7 +1,7 @@
/*
* Autopsy Forensic Browser
*
* Copyright 2011 Basis Technology Corp.
* Copyright 2011-2012 Basis Technology Corp.
* Contact: carrier <at> sleuthkit <dot> org
*
* Licensed under the Apache License, Version 2.0 (the "License");
@ -23,9 +23,10 @@ import org.sleuthkit.datamodel.BlackboardArtifact;
import org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE;
/**
* representation of an event fired off by services when they have posted new data
* of specific type
* additionally, new artifact ids can be provided
* Event data that is fired off by ingest modules when they have posted new data
* of a specific type to the blackboard.
* In its most generic form, it only gives notice about a type of artifact, but it
* can also give notice about specific artifact IDs that can be retrieved.
*/
public class ServiceDataEvent {
@ -33,11 +34,20 @@ public class ServiceDataEvent {
private ARTIFACT_TYPE artifactType;
private Collection<BlackboardArtifact> artifactIDs;
/**
* @param serviceName Module name
* @param artifactType Type of artifact that was posted to blackboard
*/
public ServiceDataEvent(String serviceName, ARTIFACT_TYPE artifactType) {
this.serviceName = serviceName;
this.artifactType = artifactType;
}
/**
* @param serviceName Module name
* @param artifactType Type of artifact that was posted to blackboard
* @param artifactIDs List of specific artifact ID values that were added to blackboard
*/
public ServiceDataEvent(String serviceName, ARTIFACT_TYPE artifactType, Collection<BlackboardArtifact> artifactIDs) {
this(serviceName, artifactType);
this.artifactIDs = artifactIDs;

View File

@ -1,80 +1,174 @@
/**
* \package org.sleuthkit.autopsy.ingest
*
* The package provides the ingest module framework; the framework defines how ingest modules should behave and provides the infrastructure to execute them.
*
*
* The two main use cases for ingest modules are:
* - to extract information from the image and write result to blackboard,
* - to analyze data already in blackboard and add more information to it.
*
* Different ingest modules generally specialize in extracting or analyzing different type of data.
*
* There may also be special-purpose core ingest modules that run early in the ingest pipeline. Results posted by such modules can be useful to subsequent modules.
* One example of such module is Hash DB module, which determines which files are known; known files are generally treated differently.
* For instance, processing of known files can be skipped by subsequent modules in the pipeline (if chosen so), for performance reasons.
*
* The framework provides interfaces every ingest module needs to implement:
* - org.sleuthkit.autopsy.ingest.IngestServiceImage (for modules that are interested in the image as a whole, or modules that selectively pick and analyze data from the image)
* - org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile (for modules that should process every file in the image).
*
* org.sleuthkit.autopsy.ingest.IngestServiceImage services run each in a separate thread, in parallel with respect to other image services.
* File services execute within the same worker thread and they run in series; for every file in the image every file ingest service is invoked.
*
* Every ingest thread is presented with a progress bar and can be cancelled by a user, or by the framework, in case of a critical system event (such as Autopsy is terminating, or an unrecoverable system error).
*
* org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile services are singleton instances
* and org.sleuthkit.autopsy.ingest.IngestServiceImage service are not singletons. There could be multiple instances of
* an image based service, because multiple images can be analyzed at the same time by multiple instances of the same image service class
* (NOTE: this design might change in the future to limit number of image services executing at the same time and to introduce a better service dependency system).
*
* The interfaces define methods to initialize, process passed in data, configure the ingest service,
* query the service state and finalize the service.
*
* The framework also contains classes:
* - org.sleuthkit.autopsy.ingest.IngestManager, the ingest manager, responsible for discovery of ingest modules, enqueuing work to the modules, starting and stopping the ingest pipeline,
* propagating messages sent from the ingest modules to other Autopsy components, querying ingest status.
* - org.sleuthkit.autopsy.ingest.IngestManagerProxy, IngestManager facade used by the modules to communicate with the manager,
* - additional classes to support threading, sending messages, ingest monitoring, ingest cancellation, progress bars,
* - a user interface component (Ingest Inbox) used to display interesting messages posted by ingest modules to the user,
*
*
* Ingest module can maintain internal threads for any special processing that can occur in parallel.
* However, the module is then responsible for creating, managing and tearing down the internal threads
* and to implement locking to protect critical sections internal to the module.
* An example of a module that maintains its own threads is the KeywordSearch module,
* which provides a periodic refresh of search results as data is being indexed by the main file ingest thread.
*
* org.sleuthkit.autopsy.ingest.IngestManager provides public API other modules can use to get ingest status updates.
* A handle to ingest manager singleton instance is obtained using org.sleuthkit.autopsy.ingest.IngestManager.getDefault().
* org.sleuthkit.autopsy.ingest.IngestManager.isIngestRunning() is used to check if any ingest modules are currently running.
* There are more granular methods to check ingest status: org.sleuthkit.autopsy.ingest.IngestManager.isFileIngestRunning() to check if the file ingest pipeline is running,
* org.sleuthkit.autopsy.ingest.IngestManager.isImageIngestRunning() to check the status of the image ingest pipeline,
* org.sleuthkit.autopsy.ingest.IngestManager.isEnqueueRunning() to check if ingest is currently being enqueued,
* and org.sleuthkit.autopsy.ingest.IngestManager.isServiceRunning() to check on a per-service level.
*
* External modules can also register themselves as ingest service event listeners and receive event notifications (when a service is started, stopped, completed or has new data).
* Use a static org.sleuthkit.autopsy.ingest.IngestManager.addPropertyChangeListener() method to register a service event listener.
* Events types received are defined in IngestManagerEvents enum.
* IngestManagerEvents.SERVICE_HAS_DATA event type, a special type of event object is passed in org.sleuthkit.autopsy.ingest.ServiceDataEvent.
* The object wraps a collection of blackboard artifacts and their associated attributes that are to be reported as the new data to listeners.
* Passing the data as part of the event reduces memory footprint and decreases number of garbage collections
* of the blackboard artifacts and attributes objects (the objects are expected to be reused by the data event listeners).
*
* If a service does not pass the data as part of ServiceDataEvent (ServiceDataEvent.getArtifacts() returns null) - it is an indication that the service
* has new data but it does not implement new data tracking. The listener can then perform a blackboard query to get the latest data of interest (e.g. by artifact type).
*
* Service name and artifact type for the collection of artifacts is also passed in as part of the service data event.
* By design, only a single type of artifacts can be contained in a single data event.
*
* At the end of the ingest, org.sleuthkit.autopsy.ingest.IngestManager itself will notify all listeners of new data being available in the blackboard.
* This ensures the listeners receive a new data notification, in case some of the modules fail to report availability of new data.
* Nevertheless, ingest module developers are encouraged to generate new data events in order to provide the real-time feedback to the user.
*
* Refer to ingest.dox and org.sleuthkit.autopsy.ingest.example examples for more details on implementing custom ingest modules.
*
*
*
*
*
*/
\package org.sleuthkit.autopsy.ingest
The package provides the ingest module framework. Ingest modules perform data analysis using a multi-threaded approach.
\section ingestmodule_contents Package Contents
The following are important classes in this package:
* Ingest Manager (org.sleuthkit.autopsy.ingest.IngestManager)
* Ingest Inbox (org.sleuthkit.autopsy.ingest.IngestMessageTopComponent)
\section ingestmodule_modules Ingest Module Basics
Ingest modules analyze data from a disk image. They typically focus on a specific type of data analysis. The modules are loaded each time that Autopsy starts. The user can choose to enable each module when they add an image to the case.
There are two types of ingest modules.
- Image-level modules are passed in a reference to an image and perform general analysis on it. These modules may query the database for a small set of files.
- File-level modules are passed in a reference to each file. The Ingest Manager chooses which files to pass and when. These modules are intended for analyses that need to cover most of the files on the system or that need to examine the content of every file (e.g. to detect file type based on signature instead of file extension).
Modules post their results to the blackboard (@@@ NEED REFERENCE FOR THIS -- org.sleuthkit.datamodel) and can query the blackboard to get the results of previous modules. For example, the hash database lookup module may want to query for a previously calculated hash value.
The ingest manager (org.sleuthkit.autopsy.ingest.IngestManager) is responsible for launching the ingest modules and passing data to them. Modules can send messages to the ingest inbox (REFERENCE) so that users can see when data has been found.
\section ingestmodule_making Making Ingest Modules
Refer to org.sleuthkit.autopsy.ingest.example for sample source code.
\subsection ingestmodule_making_api Module Interface
The first step is to choose the correct module type. Image-level modules will implement the org.sleuthkit.autopsy.ingest.IngestServiceImage interface and file-level modules will implement the org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile interface.
Every module (whether an image or a file service) also needs to implement a static getDefault() method that is not part of the interface, but that returns the registered static instance of the service. Refer to the example code in org.sleuthkit.autopsy.ingest.example.ExampleAbstractFileIngestService.getDefault().
File-level modules need to be singletons (only a single instance at a time). To ensure this, make the constructor private so that the default public constructor is not exposed. Image-level modules require a public constructor.
The interfaces have several standard methods that need to be implemented. See the interface methods for details; a partial skeleton follows the list below.
- init() method (org.sleuthkit.autopsy.ingest.IngestServiceAbstract.init()) is invoked every time an ingest session starts. A module should support multiple invocations of init() throughout the application life-cycle.
- complete() method (org.sleuthkit.autopsy.ingest.IngestServiceAbstract.complete()) is invoked when an ingest session completes. The module should perform any resource (files, handles, caches) cleanup in this method, submit final results, and post a final ingest inbox message.
- stop() method (org.sleuthkit.autopsy.ingest.IngestServiceAbstract.stop()) is invoked on a module when an ingest session is interrupted by the user or by the system.
The method implementation should be similar to complete() in that the module should perform any cleanup work. If there is pending data to be processed or pending results to be reported when stop() is invoked, the results should be rejected and ignored, and the module should terminate as early as possible.
- process() method is invoked to analyze the data. The specific method depends on the module type.
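A partial skeleton of a file-level module illustrating these lifecycle methods (a sketch only: the exact signatures are assumptions based on the text above, and the remaining interface methods, such as the configuration hooks described in \ref ingestmodule_making_configuration, are omitted):
\verbatim
public class SampleFileIngestService implements IngestServiceAbstractFile {

    private static SampleFileIngestService instance;

    private SampleFileIngestService() {}  // private constructor: file-level modules are singletons

    public static synchronized SampleFileIngestService getDefault() {
        if (instance == null) {
            instance = new SampleFileIngestService();
        }
        return instance;
    }

    @Override
    public void init(IngestManagerProxy managerProxy) {
        // allocate resources; called at the start of every ingest session
    }

    @Override
    public ProcessResult process(AbstractFile abstractFile) {
        // analyze the file and post results to the blackboard
        return ProcessResult.OK;
    }

    @Override
    public void complete() {
        // clean up resources, submit final results, post a final inbox message
    }

    @Override
    public void stop() {
        // ingest was interrupted: clean up and discard any pending results
    }

    // ... name, description, and configuration methods omitted ...
}
\endverbatim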
Multiple images can be ingested at the same time. The current behavior is that the files from the second image are added to the list of the files from the first image. The impact of this on module development is that a file-level module could be passed in files from different images in consecutive calls to process(). New instances of image-level modules will be created when the second image is added. Therefore, image-level modules should assume that the process() method will be called only once after init() is called.
Every module should support multiple init() - process() - complete(), and init() - process() - stop() invocations.
The modules should also support multiple init() - complete() and init() - stop() invocations,
which can occur if the ingest pipeline is started but no work is enqueued for the particular module.
Module developers are encouraged to use the standard java.util.logging.Logger infrastructure to log errors to the Autopsy log.
\subsection ingestmodule_making_process Process Method
The process method is where the work is done in each type of module. Some notes:
- File-level modules will be called on each file in an order determined by the IngestManager. Each module is free to quickly ignore a file based on name, signature, etc. If a module wants to know the return value from a previously run module, it should use the org.sleuthkit.autopsy.ingest.IngestManagerProxy.getAbstractFileServiceResult() method (see the sketch after this list).
- Image-level modules are not passed in specific files; they are expected to query the database to find the files that they are interested in.
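For instance, a sketch of checking a prior result inside process() ("Hash Lookup" is a hypothetical module name, and managerProxy is the proxy saved from init()):
\verbatim
// Skip work if an earlier module signaled a conditional stop for this file.
IngestServiceAbstractFile.ProcessResult prior =
        managerProxy.getAbstractFileServiceResult("Hash Lookup");
if (prior == IngestServiceAbstractFile.ProcessResult.COND_STOP) {
    return IngestServiceAbstractFile.ProcessResult.OK; // e.g. file is known-good, nothing to do
}
\endverbatim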
\subsection ingestmodule_making_registration Module Registration
Ingest modules need to be registered using the Netbeans Lookup infrastructure in the package's layer.xml file.
An example Image-level module is:
\verbatim
<file name="org-sleuthkit-autopsy-ingest-example-ExampleImageIngestService.instance">
<attr name="instanceOf" stringvalue="org.sleuthkit.autopsy.ingest.IngestServiceImage"/>
<attr name="instanceCreate" methodvalue="org.sleuthkit.autopsy.ingest.example.ExampleImageIngestService.getDefault"/>
<attr name="position" intvalue="1000"/>
</file>
\endverbatim
An example file-level module is:
\verbatim
<file name="org-sleuthkit-autopsy-ingest-example-ExampleAbstractFileIngestService.instance">
<attr name="instanceOf" stringvalue="org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile"/>
<attr name="instanceCreate" methodvalue="org.sleuthkit.autopsy.ingest.example.ExampleAbstractFileIngestService.getDefault"/>
<attr name="position" intvalue="1100"/>
</file>
\endverbatim
Note the "position" attribute. The attribute determines the ordering of the module in the ingest pipeline.
Services with a lower position attribute will execute earlier.
Use high numbers (higher than 1000) for non-core services. If your module depends on results from another module, use a higher position attribute to enforce the dependency.
Note: we plan to implement a more flexible and robust module dependency system in future versions of the Autopsy ingest framework.
New modules can be added to the Autopsy ingest pipeline by dropping jar files into build/cluster/modules.
Dropped-in modules will be automatically recognized the next time Autopsy starts.
\subsection ingestmodule_making_results Posting Results
Users will see the results from ingest modules in one of two ways:
- Results are posted to the blackboard and will be displayed in the navigation tree
- Messages are sent to the Ingest Inbox to notify a user of what has recently been found.
See the Blackboard (REFERENCE) documentation for posting results to it. Modules are free to immediately post results when they find them or they can wait. The org.sleuthkit.autopsy.ingest.IngestManagerProxy.getUpdateFrequency() method returns the maximum amount of time that a module can wait before it posts its results.
An example of waiting to post results is the keyword search module. It is resource intensive to commit the keyword index and do a keyword search. Therefore, when its process() method is invoked, it checks whether the getUpdateFrequency() interval has nearly elapsed since the last time it did a keyword search. If it has, it commits the index and performs the search.
When they add data to the blackboard, modules should notify listeners of the new data by periodically invoking org.sleuthkit.autopsy.ingest.IngestManagerProxy.fireServiceDataEvent() method. This allows other modules (and the main UI) to know when to query the blackboard for the latest data.
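A sketch of firing such an event (MODULE_NAME and newArtifacts stand in for the module's own values; artifacts in one event must share a single type, per the ServiceDataEvent constructor):
\verbatim
// Notify listeners that this module posted new keyword-hit artifacts to the blackboard.
managerProxy.fireServiceDataEvent(
        new ServiceDataEvent(MODULE_NAME,                   // this module's name
                             ARTIFACT_TYPE.TSK_KEYWORD_HIT, // single artifact type per event
                             newArtifacts));                // Collection<BlackboardArtifact>
\endverbatim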
Modules should post messages to the inbox when interesting data is found. The messages include the module name, message subject, message details, a unique message id (in the context of the originating module), and a uniqueness attribute. The uniqueness attribute is used to group similar messages together and to determine the overall importance priority of the message (if the same message is seen repeatedly, it is considered lower priority).
It is important, though, not to fill up the inbox with messages. Messages should be sent only if the result has a low false positive rate and will likely be relevant. For example, the hash lookup module will send messages if known bad (notable) files are found, but not if known good (NSRL) files are found. The keyword search module will send messages if a specific keyword matches, but will not send messages (by default) when a regular expression for URLs matches (because many of the URL hits will be false positives and could generate thousands of messages on a typical system).
Ingest messages have different types: there are info messages, warning messages, error messages and data messages.
The data messages contain encapsulated blackboard artifacts and attributes. The passed in data is used by the ingest inbox GUI widget to navigate to the artifact view in the directory tree, if requested by the user.
Ingest message API is defined in org.sleuthkit.autopsy.ingest.IngestMessage class. The class also contains factory methods to create new messages.
Messages are posted using org.sleuthkit.autopsy.ingest.IngestManagerProxy.postMessage() method, which accepts a message object created using one of the factory methods.
Modules should post inbox messages to the user when stop() or complete() is invoked (refer to the examples).
It is recommended to populate the description field of the complete() inbox message to provide feedback to the user,
summarizing the module's ingest run and noting any errors that were encountered.
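A sketch of posting a data message (the createDataMessage() parameter list here is an assumption based on the factory methods described above):
\verbatim
// Post an inbox message that points the user at a newly created blackboard artifact.
managerProxy.postMessage(IngestMessage.createDataMessage(
        ++messageId,    // unique id in the context of this module
        this,           // the originating module
        "Keyword hit",  // subject
        detailsHtml,    // message details
        uniqueKey,      // uniqueness attribute used to group similar messages
        artifact));     // the encapsulated blackboard artifact
\endverbatim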
\subsection ingestmodule_making_configuration Module Configuration
Ingest modules may require user configuration. The framework
supports two levels of configuration: run-time and general. Run-time configuration
occurs when the user selects which ingest modules to run when an image is added. This level
of configuration should allow the user to enable or disable settings. General configuration is more in-depth and
may require an interface that is more powerful than simple check boxes.
As an example, the keyword search module uses both configuration methods. The run-time configuration allows the user
to choose which lists of keywords to search for. However, if the user wants to edit the lists or create new lists, they
need to go to the general configuration window.
Module configuration is decentralized and module-specific; every module maintains its
own configuration state and is responsible for implementing the graphical interface.
The run-time configuration (also called simple configuration) is achieved by each
ingest module providing a JPanel. The org.sleuthkit.autopsy.ingest.IngestServiceAbstract.hasSimpleConfiguration(),
org.sleuthkit.autopsy.ingest.IngestServiceAbstract.getSimpleConfiguration(), and org.sleuthkit.autopsy.ingest.IngestServiceAbstract.saveSimpleConfiguration()
methods should be used for run-time configuration.
The general configuration is also achieved by the module returning a JPanel. A link will be provided to the general configuration from the ingest manager if it exists.
The org.sleuthkit.autopsy.ingest.IngestServiceAbstract.hasAdvancedConfiguration(),
org.sleuthkit.autopsy.ingest.IngestServiceAbstract.getAdvancedConfiguration(), and org.sleuthkit.autopsy.ingest.IngestServiceAbstract.saveAdvancedConfiguration()
methods should be used for general configuration.
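A sketch of the simple-configuration hooks (SampleConfigPanel is a hypothetical JPanel subclass that a module would provide):
\verbatim
private SampleConfigPanel simplePanel;  // hypothetical JPanel holding a few checkboxes

@Override
public boolean hasSimpleConfiguration() {
    return true;
}

@Override
public javax.swing.JPanel getSimpleConfiguration() {
    if (simplePanel == null) {
        simplePanel = new SampleConfigPanel();
    }
    return simplePanel;
}

@Override
public void saveSimpleConfiguration() {
    simplePanel.saveSettings();  // persist the panel state so it takes effect for this run
}
\endverbatim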
\section ingestmodule_events Getting Ingest Status and Events
Other modules and core Autopsy classes may want to get the status of the ingest manager. The org.sleuthkit.autopsy.ingest.IngestManager provides access to this data with the org.sleuthkit.autopsy.ingest.IngestManager.isIngestRunning() method.
External modules can also register themselves as ingest service event listeners and receive event notifications (when a module is started, stopped, completed or has new data). Use the org.sleuthkit.autopsy.ingest.IngestManager.addPropertyChangeListener() method to register a service event listener. Event types received are defined in the org.sleuthkit.autopsy.ingest.IngestManager.IngestManagerEvents enum.
<!-- @@@ This is a private enum… -->
<!-- @@@ This should be moved to the class documentation -- we should document what each event means -->
For the IngestManagerEvents.SERVICE_HAS_DATA event type, a special event object is passed in: org.sleuthkit.autopsy.ingest.ServiceDataEvent.
The object wraps a collection of blackboard artifacts and their associated attributes that are to be reported as the new data to listeners.
Passing the data as part of the event reduces the memory footprint and decreases the number of garbage collections
of the blackboard artifact and attribute objects (the objects are expected to be reused by the data event listeners).
If a service does not pass the data as part of the ServiceDataEvent (ServiceDataEvent.getArtifacts() returns null), it is an indication that the service
has new data but does not implement new data tracking. The listener can then perform a blackboard query to get the latest data of interest (e.g. by artifact type).
The service name and the artifact type for the collection of artifacts are also passed in as part of the service data event.
By design, only a single type of artifact can be contained in a single data event.
<!-- @@@ We should mention with what type of event they will be notified with. -->
At the end of the ingest, org.sleuthkit.autopsy.ingest.IngestManager itself will notify all listeners of new data being available in the blackboard.
Module developers are encouraged to generate events when they post data to the blackboard, but the IngestManager will fire a final event to handle scenarios where the module did not notify listeners while it was running.
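A sketch of such a listener (grounded in this commit: the manager fires SERVICE_HAS_DATA_EVT with the ServiceDataEvent passed as the event's old value):
\verbatim
IngestManager.addPropertyChangeListener(new java.beans.PropertyChangeListener() {
    @Override
    public void propertyChange(java.beans.PropertyChangeEvent evt) {
        if (IngestManager.SERVICE_HAS_DATA_EVT.equals(evt.getPropertyName())) {
            ServiceDataEvent dataEvent = (ServiceDataEvent) evt.getOldValue();
            if (dataEvent.getArtifacts() == null) {
                // module does not track new artifacts: query the blackboard by type instead
            }
        }
    }
});
\endverbatim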
*/

View File

@ -0,0 +1,19 @@
/**
\package org.sleuthkit.autopsy.report
This package provides the reporting framework. Reporting modules allow you to get output from Autopsy in the form of XML, HTML, etc.
\section report_contents Package Contents
The following are important classes in this package:
* TODO
\section report_making Making a Report Module
TODO
*/

View File

@ -657,9 +657,6 @@ WARN_LOGFILE =
INPUT = main.dox \
design.dox \
ingest.dox \
contentViewer.dox \
report.dox \
regressionTesting.dox \
../../Case/src \
../../CoreComponentInterfaces/src \

View File

@ -1,8 +0,0 @@
/*! \page contentViewer_page Creating Content Viewers
\section cv_overview Overview
This page will talk about making content viewers. We have not written it yet.
*/

View File

@ -2,61 +2,51 @@
\section design_overview Overview
This section outlines Autopsy design from the typical analysis work flow perspective.
A typical Autopsy work flow consists of the following steps:
This page is organized based on these phases:
- A Case is created.
- Images are added to the case and ingest modules are run.
- Results are manually reviewed and searched.
- Reports are generated.
- Wizards are used to create a case and add images (org.sleuthkit.autopsy.casemodule),
- TSK database is created,
- Ingest modules are executed (org.sleuthkit.autopsy.ingest),
- Ingest modules post results to the blackboard and ingest inbox,
- Directory tree displays blackboard contents,
- Data is encapsulated into nodes and passed to table and content viewers,
- Reports can be generated.
\section design_case Creating a Case
\subsection design_overview_sub1 Creating a case
The first step in Autopsy work flow is creating a case.
User is guided with the case creation wizard to enter the case name, base directory and optional case information.
Autopsy creates the case directory (named after the case name), where all the case data is stored.
An empty TSK database is created and initialized.
For more information on the case module refer to the org.sleuthkit.autopsy.casemodule documentation.
\subsection design_overview_sub2 Adding an image
After case is created, one or more disk images can be added to the case, using the Add Image Wizard.
The process invokes internally the native sleuthkit library.
The library reads the image and populates the TSK database with the image meta-data.
For more information on the add image internals, refer to org.sleuthkit.autopsy.casemodule documentation.
\subsection design_overview_sub4 Running ingest modules
After image has been added to the case, user can select one or more ingest modules to be executed on the image.
Most ingest modules can be configured before the run using simple or advanced configuration panels (or both).
The work of ingest services is performed in the background and ingest progress is indicated by progress bars.
Autopsy provides ingest module framework in the ingest package and custom modules can be developed and added to Autopsy.
For more information refer to the org.sleuthkit.autopsy.ingest package documentation and ingest.dox
The first step in the Autopsy work flow is creating a case. This is done in the org.sleuthkit.autopsy.casemodule package (see \ref casemodule_overview). This package contains the wizards needed and deals with how to store the information. You should not need to make many modifications in this package, but you will want to use the org.sleuthkit.autopsy.casemodule.Case object to access all data related to this case.
\subsection design_overview_sub5 Ingest modules posting results
\section design_image Adding an Image
Ingest services, when running, produce data and write the data to the blackboard
in form of blackboard artifacts and associated blackboard attributes.
The services notify listeners of the availability of the data.
The default listener is the Autopsy directory tree UI component.
The component displays data currently saved in the blackboard and it also
refreshes the data view in real-time in response to service events.
Ingest services post interesting messages about the incoming data to Ingest Inbox.
After a case is created, one or more disk images can be added to the case. There is a wizard to guide that process
and it is located in the org.sleuthkit.autopsy.casemodule package. Refer to the package section \ref casemodule_add_image for more details on the wizard.
\subsection design_overview_sub6 Result viewers (directory tree, table viewers, content viewers)
After an image has been added to the case, the user can select one or more ingest modules to be executed on the image.
Ingest modules focus on a specific type of analysis task and run in the background. The results from the ingest module can be found in the results tree and in the ingest inbox.
The directory tree (in the left-hand panel of the Autopsy viewer)
is the results viewer for the results saved in the database during ingest process.
The org.sleuthkit.autopsy.ingest package provides the basic infrastructure for ingest module management. See \ref ingestmodule_contents for more details.
A list of standard modules that come with Autopsy can be found in:
- org.sleuthkit.autopsy.keywordsearch
- org.sleuthkit.autopsy.recentactivity
- org.sleuthkit.autopsy.hashdatabase
- org.sleuthkit.autopsy.thunderbirdparser
See \ref ingestmodule_making for more details on making an ingest module.
\section design_view Viewing Results
The UI has three main areas: the tree on the left-hand side, the result viewers in the upper right, and the content viewers in the lower right. Data passes between these areas by encapsulating it in Node objects (see org.openide.nodes.Node). Nodes use property sheets to encapsulate data (blackboard attributes) and are modeled in a parent-child hierarchy with other nodes.
The hierarchy is used to visually represent the data and to trigger child node updates when the parent node is selected.
Node child factories are invoked by the Netbeans framework at the time of parent node selection to populate and refresh the child node view.
The tree on the left-hand side shows the analysis results. Its contents are populated from the central database. See the org.sleuthkit.autopsy.directorytree module for more details.
The area in the upper right is the result viewer area. When a node is selected from the tree, the data is sent to this area. It is a framework with modules that display the data in different layouts. The org.sleuthkit.autopsy.corecomponentinterfaces package has the interface to make one of these modules.
When an item is selected from the result viewer area, it is passed to the content viewers in the bottom right. It too is a framework with many modules that know how to show information about a specific file in different ways. The org.sleuthkit.autopsy.corecomponentinterfaces package has the interface to make one of these modules.
<!-- @@@ MOVE THIS SOMEWHERE ELSE -- the directory tree package maybe??
The component is by default registered with the ingest manager as an ingest event listener.
The viewer first loads all the viewer-supported data currently in the blackboard when Autopsy starts.
@ -65,39 +55,33 @@ During the ingest process the viewer receives events from ingest services
When ingest is completed, the viewer responds to the final ingest data event generated by the ingest manager,
and performs a final refresh of all viewer-supported data in the blackboard.
Data presented is encapsulated in node objects (org.openide.nodes.Node) before it is displayed in the UI.
Nodes use property sheets to encapsulate data (blackboard attributes) and are modeled in a parent-child hierarchy with other nodes.
The hierarchy is used to visually represent the data and to trigger child node updates when the parent node is selected.
Node child factories are invoked by the Netbeans framework at the time of parent node selection to populate and refresh the child node view.
User normally initiates result navigation in the directory tree.
When a node is selected, it is passed in to the table result viewer (top-right).
When a node is selected in the table result viewer, it is passed in to the content viewers (bottom-right).
Node content support capabilities are registered in the node's Lookup.
Multiple content viewers (such as strings, hex, extracted text, media) can support the node content.
If multiple content viewers are supported, a preferred (default) content viewer is chosen.
For more information refer to org.sleuthkit.autopsy.corecomponents, org.sleuthkit.autopsy.corecomponentsinterfaces
and org.sleuthkit.autopsy.directorytree
and
-->
\section design_report Report generation
When ingest is complete, the user can generate reports. There is a reporting framework to enable many different formats. Autopsy currently comes with generic HTML, XML, and Excel reports. See the org.sleuthkit.autopsy.report package for details on the framework and
\ref report_making for details on building a new report module.
\subsection design_overview_sub7 Report generation
When ingest is complete, the user can generate reports (reports can also be generated during ingest - such a report might not contain all results).
There are several types of reports implemented as submodules that are shipped with Autopsy core: generic html, xml and Excel reports.
Each reporting submodule implements org.sleuthkit.autopsy.report.ReportModule interface and registers itself in layer.xml
<!--Each reporting submodule implements org.sleuthkit.autopsy.report.ReportModule interface and registers itself in layer.xml
Reporting submodule typically interacts with 3 components:
- org.sleuthkit.autopsy.report.ReportConfiguration - to read current reporting configuration set by the user,
- Blackboard API in org.sleuthkit.datamodel.SleuthkitCase class - to traverse and read blackboard artifacts and attributes,
- an API (possibly external/thirdparty API) to convert blackboard artifacts data structures to the desired reporting format.
Autopsy reporting module is present in org.sleuthkit.autopsy.report package.
Please refer to report.dox and org.sleuthkit.autopsy.report package API documentation for more details on how to implement a custom reporting submodule.
-->
*/

View File

@ -1,175 +0,0 @@
/*! \page ingest_page Creating Ingest Modules
\section ingest_overview Overview
Autopsy provides ingest framework in org.sleuthkit.autopsy.ingest.
Ingest modules (ingest services) are designed to be pluggable into the ingest pipeline.
New modules can be added to the Autopsy ingest pipeline by dropping in jar files into build/cluster/modules.
Dropped in module will be automatically recognized next time Autopsy starts.
This document outlines steps to implement a functional ingest module.
\subsection ingest_interface Interfaces
Implement one of the interfaces:
- org.sleuthkit.autopsy.ingest.IngestServiceImage (for modules that are interested in the entire image, or selectively pick and analyze data from the image)
- org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile (for modules that process every file in the image).
org.sleuthkit.autopsy.ingest.IngestServiceAbstract declares common methods for both types of services.
\subsection ingest_interface_details Implementation details.
Refer to org.sleuthkit.autopsy.ingest.example package source code for sample service code.
There is a static getDefault() method that is not part of the interface, that every module (whether an image or a file service)
needs to implement to return the registered static instance of the service.
Refer to example code in org.sleuthkit.autopsy.ingest.example.ExampleAbstractFileIngestService.getDefault()
A file ingest service requires a private constructor to ensure one and only one (singleton) instance.
Ensure the default public file service constructor is overridden with the private one.
An image ingest service, requires a public constructor.
Most work is typically performed in process() method invoked by the ingest pipeline.
The method takes either a file or an image as an argument (depending on the type of the service).
If new data is produced in process() method, it will be written to the blackboard using the blackboard API in SleuthkitCase class.
Also, a data event will be generated, and inbox ingest message can be posted.
Services can alternatively enqueue work in process() for later processing (more common if the service manages internal threads).
init() method is invoked on a service (by ingest manager) every time ingest pipeline starts.
A service should support multiple invocations of init() throughout the application life-cycle.
complete() method is invoked on a service when the entire ingest completes.
The service should perform any resource (files, handles, caches) cleanup in this method and submit final results and post a final ingest inbox message.
stop() method is invoked on a service when ingest is interrupted (by the user or by the system).
The method implementation should be similar to complete(),
in that the service should perform any cleanup work. The common cleanup code for stop() and complete() can often be refactored.
If there is pending data to be processed or pending results to be reported by the service;
the results should be rejected and ignored if stop() is invoked and the service should terminate as early as possible.
Services should post inbox messages to the user when stop() or complete() is invoked (refer to the examples).
It is recommended to populate the description field of the complete inbox message to provide feedback to the user
summarizing the module ingest run and if any errors were encountered.
Every service should support multiple init() - process() - complete(), and init() - process() - stop() invocations.
The services should also support multiple init() - complete() and init() - stop() invocations,
which can occur if ingest pipeline is started but no work is enqueued for the particular service.
Module developers are encouraged to use the standard java.util.logging.Logger infrastructure to log errors to the Autopsy log.
\subsection ingest_registration Service Registration
Ingest service class / module should register itself using Netbeans Lookup infrastructure
in layer.xml file in the same package where the ingest module is located.
Example image ingest service registration:
<file name="org-sleuthkit-autopsy-ingest-example-ExampleImageIngestService.instance">
<attr name="instanceOf" stringvalue="org.sleuthkit.autopsy.ingest.IngestServiceImage"/>
<attr name="instanceCreate" methodvalue="org.sleuthkit.autopsy.ingest.example.ExampleImageIngestService.getDefault"/>
<attr name="position" intvalue="1000"/>
</file>
File image ingest service registration:
<file name="org-sleuthkit-autopsy-ingest-example-ExampleAbstractFileIngestService.instance">
<attr name="instanceOf" stringvalue="org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile"/>
<attr name="instanceCreate" methodvalue="org.sleuthkit.autopsy.ingest.example.ExampleAbstractFileIngestService.getDefault"/>
<attr name="position" intvalue="1100"/>
</file>
Note the "position" attribute. The attribute determines the ordering of the module in the ingest pipeline.
Services with lower position attribute will execute earlier.
Use high numbers (higher than 1000) for non-core services.
If your module depends on results from another module, use a higher position attribute to enforce the dependency.
Note: we plan to implement a more flexible and robust module dependency system in future versions of the Autopsy ingest framework.
\subsection ingest_configuration Service Configuration
Ingest modules typically require configuration before they are executed and the ingest module framework
supports 2 levels of configuration: simple and advanced.
Simple configuration should present the most important and most frequently tuned ingest parameters.
Any additional parameters should be part of advanced configuration.
Module configuration is decentralized and module-specific; every module maintains its
own configuration state and is responsible for implementing its own JPanel to render
and present the configuration to the user.
JPanel implementation should support scrolling if the configuration widgets require
more real-estate than the parent container.
Configuration methods are declared in the ingest modules interfaces.
For example, to implement simple configuration, module should return true in:
org.sleuthkit.autopsy.ingest.IngestServiceAbstract.hasSimpleConfiguration()
org.sleuthkit.autopsy.ingest.IngestServiceAbstract.getSimpleConfiguration()
should then return javax.swing.JPanel instance.
To save the simple configuration state, the module should implement
org.sleuthkit.autopsy.ingest.IngestServiceAbstract.saveSimpleConfiguration()
\subsection file_ingest_return File Ingest Service Return Values
File ingest services are expected to return org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile.ProcessResult from
the file service org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile.process() method.
A service can communicate via the return value if it thinks subsequent services should continue processing the file,
whether the pipeline should terminate processing of the file, or whether it should be decided by the subsequent service (in which case the return value is used by the subscribed service as a hint).
The return value of every service that has already processed the file is stored in the pipeline
(by the ingest manager) for the duration of processing the file in the pipeline.
Any service interested in retrieving the return value from previously executed services for that file should use
org.sleuthkit.autopsy.ingest.IngestManagerProxy.getAbstractFileServiceResult() method and pass in the service name.
If the return value is not available for the service for the current file in the pipeline, org.sleuthkit.autopsy.ingest.IngestServiceAbstractFile.ProcessResult.UNKNOWN is returned.
\subsection ingest_events Sending Service Events and Posting Data
Service should notify listeners of new data available periodically by invoking org.sleuthkit.autopsy.ingest.IngestManagerProxy.fireServiceDataEvent() method.
The method accepts org.sleuthkit.autopsy.ingest.ServiceDataEvent parameter.
The artifacts passed in a single event should be of the same type,
which is enforced by the org.sleuthkit.autopsy.ingest.ServiceDataEvent constructor.
\subsection ingest_intervals Data Posting Intervals
The timing as to when a service posts results data is module-implementation-specific.
In a simple case, service may post new data as soon as the data is available -- likely
for simple services that take a relatively short amount of time to execute and new data is expected
to arrive in the order of seconds.
Another possibility is to post data in fixed time-intervals (e.g. for a service that takes minutes to produce results
and for a service that maintains internal threads to perform work).
There exists a global update setting that specifies the maximum time interval for the service to post data.
User may adjust the interval for more frequent, real-time updates. Services that post data in periodic intervals should post their data according to this setting.
The setting is retrieved by the module using org.sleuthkit.autopsy.ingest.IngestManagerProxy.getUpdateFrequency() method.
\subsection ingest_messages Posting Inbox Messages
Ingest services should send ingest messages about interesting events to the user.
Examples of such events include service status (started, stopped) or information about new data.
The messages include the source service, message subject, message details, unique message id (in the context of the originating service)
and a uniqueness attribute, used to group similar messages together and to determine the overall importance priority of the message.
A message group with a higher number of aggregate messages with the same uniqueness is considered a lower priority.
Ingest messages have different types: there are info messages, warning messages, error messages and data messages.
The data messages contain encapsulated blackboard artifacts and attributes. The passed in data is used by the ingest inbox GUI widget to navigate to the artifact view in the directory tree, if requested by the user.
Ingest message API is defined in org.sleuthkit.autopsy.ingest.IngestMessage class. The class also contains factory methods to create new messages.
Messages are posted using org.sleuthkit.autopsy.ingest.IngestManagerProxy.postMessage() method, which accepts a message object created using one of the factory methods.
The recipient of the ingest messages is org.sleuthkit.autopsy.ingest.IngestMessageTopComponent. The messages are relayed by the ingest manager.
*/

View File

@ -4,9 +4,6 @@
Autopsy has been designed as a platform for open source tools besides just The Sleuth Kit. This document is for developers who want to add functionality into Autopsy. This could be in the form of enhancing the existing functionality or making a module that plugs into it, which you may distribute from your own site or push back into the base distribution.
- \subpage design_page
- \subpage ingest_page
- \subpage contentViewer_page
- \subpage report_page
- \subpage regression_test_page
*/

View File

@ -1,8 +0,0 @@
/*! \page report_page Creating Report Modules
\section report_overview Overview
This page will talk about making report modules. We have not written it yet.
*/