diff --git a/.gitattributes b/.gitattributes new file mode 100644 index 0000000000..70e247e540 --- /dev/null +++ b/.gitattributes @@ -0,0 +1,14 @@ +*.java text diff=java + +*.txt text +*.sh text +*.mf text +*.xml text +*.form text +*.properties text +*.html text diff=html +*.dox text +Doxyfile text + +*.py text diff=python +*.pl text diff --git a/0 b/0 deleted file mode 100755 index a88ffddb66..0000000000 Binary files a/0 and /dev/null differ diff --git a/API-CHANGES.txt b/API-CHANGES.txt index d2082bf901..1c268170a4 100644 --- a/API-CHANGES.txt +++ b/API-CHANGES.txt @@ -2,5 +2,8 @@ Changes to make to API when we are ready to make backward incompatible changes: - HTMLReport has special API for more context on columns and special handling in REportGenerator. Change all reports to the new API. - DataContentViewer.isPreferred does not need isSupported to be passed in -- DataContentViewerHex and STrings can have the public setDataView methods removed in favor of the new private ones -Content.getUniquePath() shoudl not thrown TskException. We should deal with it in the method. +- DataContentViewerHex and Strings can have the public setDataView methods removed in favor of the new private ones +- Content.getUniquePath() should not throw TskException. We should deal with it in the method. +- Make the list of events that Case fires off to be part of an enum to group them together (like IngestManager does). +- Sub-modules in RecentActivity have a bunch of public/protected variables that do not need to be. (i.e. ExtractRegistry.rrFullFound). +- Delete BrowserType enum and BrowserActivityType in RecentActivity. diff --git a/BUILDING.txt b/BUILDING.txt index 7cba4cdc1e..5577925b21 100644 --- a/BUILDING.txt +++ b/BUILDING.txt @@ -11,8 +11,7 @@ correct C libraries. STEPS: 1) Get Java Setup -1a) Download and install 32-bit version of JDK version 1.7 (32-bit is currently -needed even if you have a 64-bit system). +1a) Download and install JDK version 1.7.
You can now use 32-bit or 64-bit, but special work is needed to get The Sleuth Kit to compile as 64-bit. So, 32-bit is easier. Autopsy has been used and tested with Oracle JavaSE and the included JavaFX support (http://www.oracle.com/technetwork/java/javase/downloads/index.html). @@ -26,7 +25,8 @@ Note: Netbeans IDE is not required to build and run Autopsy, but it is a recommended IDE to use for development of Autopsy modules. 1d) (optional) If you are going to package Autopsy, then you'll also -need to set JRE_HOME to the root JRE directory. +need to set JRE_HOME_32 to the root 32-bit JRE directory and/or JRE_HOME_64 +to the root 64-bit JRE directory. 1e) (optional) For some Autopsy features to be functional, you need to add java executable to the system PATH. @@ -37,6 +37,9 @@ need to set JRE_HOME to the root JRE directory. later). All you need is the dll file. Note that you will get a launching error if you use libewf 1. - http://sourceforge.net/projects/libewf/ +If you want to build the 64-bit version of The Sleuth Kit, download +our 64-bit version of libewf: +- https://github.com/sleuthkit/libewf_64bit 2b) Set LIBEWF_HOME environment variable to root directory of LIBEWF @@ -97,13 +100,13 @@ BACKGROUND: Here are some notes to shed some light on what is going on during the build process. -- NetBeans uses ant to build Autopsy. The build target locates TSK -(and LIBEWF) based on the environment variables and copies the -needed JAR and library files into the DataModel module in the Autopsy -project (see build-unix.xml and build-windows.xml in the root -directory for details). If you want to use the debug version of -the TSK dll, then edit the copy line in the build-windows.xml file -to copy from the Debug folder. +- The Sleuth Kit Java datamodel JAR file has native libraries +that are copied into it. + +- NetBeans uses ant to build Autopsy. The build target copies the +TSK datamodel JAR file into the project. 
If you want to use the +debug version of the TSK dll, then there is a different ant target +in TSK to copy the debug versions of the dlls. - On a Windows system, the ant target copies all needed libraries to the autopsy folder. On a Unix system, the ant taget copies only diff --git a/Core/manifest.mf b/Core/manifest.mf index 31bfec73de..7aa34c46dc 100644 --- a/Core/manifest.mf +++ b/Core/manifest.mf @@ -1,10 +1,10 @@ -Manifest-Version: 1.0 -OpenIDE-Module: org.sleuthkit.autopsy.core/9 -OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/core/Bundle.properties -OpenIDE-Module-Layer: org/sleuthkit/autopsy/core/layer.xml -OpenIDE-Module-Implementation-Version: 9 -OpenIDE-Module-Requires: org.openide.windows.WindowManager, org.netbeans.api.javahelp.Help -AutoUpdate-Show-In-Client: true -AutoUpdate-Essential-Module: true -OpenIDE-Module-Install: org/sleuthkit/autopsy/core/Installer.class - +Manifest-Version: 1.0 +OpenIDE-Module: org.sleuthkit.autopsy.core/9 +OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/core/Bundle.properties +OpenIDE-Module-Layer: org/sleuthkit/autopsy/core/layer.xml +OpenIDE-Module-Implementation-Version: 9 +OpenIDE-Module-Requires: org.openide.windows.WindowManager, org.netbeans.api.javahelp.Help +AutoUpdate-Show-In-Client: true +AutoUpdate-Essential-Module: true +OpenIDE-Module-Install: org/sleuthkit/autopsy/core/Installer.class + diff --git a/Core/nbproject/project.xml b/Core/nbproject/project.xml index 9b49c8e266..88dd528aeb 100644 --- a/Core/nbproject/project.xml +++ b/Core/nbproject/project.xml @@ -191,6 +191,7 @@ + org.sleuthkit.autopsy.actions org.sleuthkit.autopsy.casemodule org.sleuthkit.autopsy.casemodule.services org.sleuthkit.autopsy.core diff --git a/Core/src/org/sleuthkit/autopsy/actions/AddBlackboardArtifactTagAction.java b/Core/src/org/sleuthkit/autopsy/actions/AddBlackboardArtifactTagAction.java new file mode 100755 index 0000000000..4e3efcad87 --- /dev/null +++ 
b/Core/src/org/sleuthkit/autopsy/actions/AddBlackboardArtifactTagAction.java @@ -0,0 +1,69 @@ +/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis Technology Corp. + * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.actions; + +import java.util.Collection; +import java.util.logging.Level; +import javax.swing.JOptionPane; +import org.openide.util.Utilities; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.autopsy.coreutils.Logger; +import org.sleuthkit.datamodel.TagName; +import org.sleuthkit.datamodel.BlackboardArtifact; +import org.sleuthkit.datamodel.TskCoreException; + +/** + * Instances of this Action allow users to apply tags to blackboard artifacts. + */ +public class AddBlackboardArtifactTagAction extends AddTagAction { + // This class is a singleton to support multi-selection of nodes, since + // org.openide.nodes.NodeOp.findActions(Node[] nodes) will only pick up an Action if every + // node in the array returns a reference to the same action object from Node.getActions(boolean). 
+ private static AddBlackboardArtifactTagAction instance; + + public static synchronized AddBlackboardArtifactTagAction getInstance() { + if (null == instance) { + instance = new AddBlackboardArtifactTagAction(); + } + return instance; + } + + private AddBlackboardArtifactTagAction() { + super(""); + } + + @Override + protected String getActionDisplayName() { + return Utilities.actionsGlobalContext().lookupAll(BlackboardArtifact.class).size() > 1 ? "Tag Results" : "Tag Result"; + } + + @Override + protected void addTag(TagName tagName, String comment) { + Collection<? extends BlackboardArtifact> selectedArtifacts = Utilities.actionsGlobalContext().lookupAll(BlackboardArtifact.class); + for (BlackboardArtifact artifact : selectedArtifacts) { + try { + Case.getCurrentCase().getServices().getTagsManager().addBlackboardArtifactTag(artifact, tagName, comment); + } + catch (TskCoreException ex) { + Logger.getLogger(AddBlackboardArtifactTagAction.class.getName()).log(Level.SEVERE, "Error tagging result", ex); + JOptionPane.showMessageDialog(null, "Unable to tag " + artifact.getDisplayName() + ".", "Tagging Error", JOptionPane.ERROR_MESSAGE); + } + } + } +} diff --git a/Core/src/org/sleuthkit/autopsy/actions/AddContentTagAction.java b/Core/src/org/sleuthkit/autopsy/actions/AddContentTagAction.java new file mode 100755 index 0000000000..8760ed364f --- /dev/null +++ b/Core/src/org/sleuthkit/autopsy/actions/AddContentTagAction.java @@ -0,0 +1,99 @@ +/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis Technology Corp. + * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.actions; + +import java.util.Collection; +import java.util.logging.Level; +import javax.swing.JOptionPane; +import org.openide.util.Utilities; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.autopsy.coreutils.Logger; +import org.sleuthkit.datamodel.AbstractFile; +import org.sleuthkit.datamodel.Content; +import org.sleuthkit.datamodel.TagName; +import org.sleuthkit.datamodel.TskCoreException; + +/** + * Instances of this Action allow users to apply tags to content. + */ +public class AddContentTagAction extends AddTagAction { + // This class is a singleton to support multi-selection of nodes, since + // org.openide.nodes.NodeOp.findActions(Node[] nodes) will only pick up an Action if every + // node in the array returns a reference to the same action object from Node.getActions(boolean). + private static AddContentTagAction instance; + + public static synchronized AddContentTagAction getInstance() { + if (null == instance) { + instance = new AddContentTagAction(); + } + return instance; + } + + private AddContentTagAction() { + super(""); + } + + @Override + protected String getActionDisplayName() { + return Utilities.actionsGlobalContext().lookupAll(AbstractFile.class).size() > 1 ? 
"Tag Files" : "Tag File"; + } + + @Override + protected void addTag(TagName tagName, String comment) { + Collection<? extends AbstractFile> selectedFiles = Utilities.actionsGlobalContext().lookupAll(AbstractFile.class); + for (AbstractFile file : selectedFiles) { + try { + // Handle the special cases of current (".") and parent ("..") directory entries. + if (file.getName().equals(".")) { + Content parentFile = file.getParent(); + if (parentFile instanceof AbstractFile) { + file = (AbstractFile)parentFile; + } + else { + JOptionPane.showMessageDialog(null, "Unable to tag " + parentFile.getName() + ", not a regular file.", "Cannot Apply Tag", JOptionPane.WARNING_MESSAGE); + continue; + } + } + else if (file.getName().equals("..")) { + Content parentFile = file.getParent(); + if (parentFile instanceof AbstractFile) { + parentFile = (AbstractFile)((AbstractFile)parentFile).getParent(); + if (parentFile instanceof AbstractFile) { + file = (AbstractFile)parentFile; + } + else { + JOptionPane.showMessageDialog(null, "Unable to tag " + parentFile.getName() + ", not a regular file.", "Cannot Apply Tag", JOptionPane.WARNING_MESSAGE); + continue; + } + } + else { + JOptionPane.showMessageDialog(null, "Unable to tag " + parentFile.getName() + ", not a regular file.", "Cannot Apply Tag", JOptionPane.WARNING_MESSAGE); + continue; + } + } + + Case.getCurrentCase().getServices().getTagsManager().addContentTag(file, tagName, comment); + } + catch (TskCoreException ex) { + Logger.getLogger(AddContentTagAction.class.getName()).log(Level.SEVERE, "Error tagging result", ex); + JOptionPane.showMessageDialog(null, "Unable to tag " + file.getName() + ".", "Tagging Error", JOptionPane.ERROR_MESSAGE); + } + } + } +} \ No newline at end of file diff --git a/Core/src/org/sleuthkit/autopsy/actions/AddTagAction.java b/Core/src/org/sleuthkit/autopsy/actions/AddTagAction.java new file mode 100755 index 0000000000..65f6a5e589 --- /dev/null +++ b/Core/src/org/sleuthkit/autopsy/actions/AddTagAction.java @@ -0,0 +1,148 @@
+/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis Technology Corp. + * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.actions; + +import java.awt.event.ActionEvent; +import java.awt.event.ActionListener; +import java.util.List; +import java.util.logging.Level; +import javax.swing.JMenu; +import javax.swing.JMenuItem; +import org.openide.util.actions.Presenter; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.autopsy.casemodule.services.TagsManager; +import org.sleuthkit.datamodel.TagName; +import org.sleuthkit.datamodel.TskCoreException; +import org.sleuthkit.autopsy.coreutils.Logger; + +/** + * An abstract base class for Actions that allow users to tag SleuthKit data + * model objects. + */ +abstract class AddTagAction extends TagAction implements Presenter.Popup { + private static final String NO_COMMENT = ""; + + AddTagAction(String menuText) { + super(menuText); + } + + @Override + public JMenuItem getPopupPresenter() { + return new TagMenu(); + } + + @Override + protected void doAction(ActionEvent event) { + } + + /** + * Template method to allow derived classes to provide a string for a + * menu item label. + */ + abstract protected String getActionDisplayName(); + + /** + * Template method to allow derived classes to add the indicated tag and + * comment to one or more SleuthKit data model objects.
+ */ + abstract protected void addTag(TagName tagName, String comment); + + /** + * Instances of this class implement a context menu user interface for + * creating or selecting a tag name for a tag and specifying an optional tag + * comment. + */ + // @@@ This user interface has some significant usability issues and needs + // to be reworked. + private class TagMenu extends JMenu { + TagMenu() { + super(getActionDisplayName()); + + // Get the current set of tag names. + TagsManager tagsManager = Case.getCurrentCase().getServices().getTagsManager(); + List<TagName> tagNames = null; + try { + tagNames = tagsManager.getAllTagNames(); + } + catch (TskCoreException ex) { + Logger.getLogger(TagsManager.class.getName()).log(Level.SEVERE, "Failed to get tag names", ex); + } + + // Create a "Quick Tag" sub-menu. + JMenu quickTagMenu = new JMenu("Quick Tag"); + add(quickTagMenu); + + // Each tag name in the current set of tags gets its own menu item in + // the "Quick Tags" sub-menu. Selecting one of these menu items adds + // a tag with the associated tag name. + if (null != tagNames && !tagNames.isEmpty()) { + for (final TagName tagName : tagNames) { + JMenuItem tagNameItem = new JMenuItem(tagName.getDisplayName()); + tagNameItem.addActionListener(new ActionListener() { + @Override + public void actionPerformed(ActionEvent e) { + addTag(tagName, NO_COMMENT); + refreshDirectoryTree(); + } + }); + quickTagMenu.add(tagNameItem); + } + } + else { + JMenuItem empty = new JMenuItem("No tags"); + empty.setEnabled(false); + quickTagMenu.add(empty); + } + + quickTagMenu.addSeparator(); + + // The "Quick Tag" menu also gets a "New Tag..." menu item. + // Selecting this item initiates a dialog that can be used to create + // or select a tag name and adds a tag with the resulting name.
+ JMenuItem newTagMenuItem = new JMenuItem("New Tag..."); + newTagMenuItem.addActionListener(new ActionListener() { + @Override + public void actionPerformed(ActionEvent e) { + TagName tagName = GetTagNameDialog.doDialog(); + if (tagName != null) { + addTag(tagName, NO_COMMENT); + refreshDirectoryTree(); + } + } + }); + quickTagMenu.add(newTagMenuItem); + + // Create a "Choose Tag and Comment..." menu item. Selecting this item initiates + // a dialog that can be used to create or select a tag name with an + // optional comment and adds a tag with the resulting name. + JMenuItem tagAndCommentItem = new JMenuItem("Tag and Comment..."); + tagAndCommentItem.addActionListener(new ActionListener() { + @Override + public void actionPerformed(ActionEvent e) { + GetTagNameAndCommentDialog.TagNameAndComment tagNameAndComment = GetTagNameAndCommentDialog.doDialog(); + if (null != tagNameAndComment) { + addTag(tagNameAndComment.getTagName(), tagNameAndComment.getComment()); + refreshDirectoryTree(); + } + } + }); + add(tagAndCommentItem); + } + } +} \ No newline at end of file diff --git a/Core/src/org/sleuthkit/autopsy/actions/Bundle.properties b/Core/src/org/sleuthkit/autopsy/actions/Bundle.properties new file mode 100755 index 0000000000..ad0c347ecb --- /dev/null +++ b/Core/src/org/sleuthkit/autopsy/actions/Bundle.properties @@ -0,0 +1,16 @@ +GetTagNameDialog.tagNameField.text= +GetTagNameDialog.cancelButton.text=Cancel +GetTagNameDialog.okButton.text=OK +GetTagNameDialog.preexistingLabel.text=Pre-existing Tags: +GetTagNameDialog.newTagPanel.border.title=New Tag +GetTagNameDialog.tagNameLabel.text=Tag Name: +GetTagNameAndCommentDialog.newTagButton.text=New Tag +GetTagNameAndCommentDialog.okButton.text=OK +GetTagNameAndCommentDialog.commentText.toolTipText=Enter an optional tag comment or leave blank +GetTagNameAndCommentDialog.commentText.text= +GetTagNameAndCommentDialog.commentLabel.text=Comment: +# To change this template, choose Tools | Templates +# and open the 
template in the editor. +GetTagNameAndCommentDialog.cancelButton.text=Cancel +GetTagNameAndCommentDialog.tagCombo.toolTipText=Select tag to use +GetTagNameAndCommentDialog.tagLabel.text=Tag: diff --git a/Core/src/org/sleuthkit/autopsy/actions/DeleteBlackboardArtifactTagAction.java b/Core/src/org/sleuthkit/autopsy/actions/DeleteBlackboardArtifactTagAction.java new file mode 100755 index 0000000000..3899b09c82 --- /dev/null +++ b/Core/src/org/sleuthkit/autopsy/actions/DeleteBlackboardArtifactTagAction.java @@ -0,0 +1,67 @@ +/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis Technology Corp. + * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.actions; + +import java.awt.event.ActionEvent; +import java.util.Collection; +import java.util.logging.Level; +import java.util.logging.Logger; +import javax.swing.JOptionPane; +import org.openide.util.Utilities; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.datamodel.BlackboardArtifactTag; +import org.sleuthkit.datamodel.TskCoreException; + +/** + * Instances of this Action allow users to delete tags applied to blackboard artifacts. 
+ */ +public class DeleteBlackboardArtifactTagAction extends TagAction { + private static final String MENU_TEXT = "Delete Tag(s)"; + + // This class is a singleton to support multi-selection of nodes, since + // org.openide.nodes.NodeOp.findActions(Node[] nodes) will only pick up an Action if every + // node in the array returns a reference to the same action object from Node.getActions(boolean). + private static DeleteBlackboardArtifactTagAction instance; + + public static synchronized DeleteBlackboardArtifactTagAction getInstance() { + if (null == instance) { + instance = new DeleteBlackboardArtifactTagAction(); + } + return instance; + } + + private DeleteBlackboardArtifactTagAction() { + super(MENU_TEXT); + } + + @Override + protected void doAction(ActionEvent event) { + Collection<? extends BlackboardArtifactTag> selectedTags = Utilities.actionsGlobalContext().lookupAll(BlackboardArtifactTag.class); + for (BlackboardArtifactTag tag : selectedTags) { + try { + Case.getCurrentCase().getServices().getTagsManager().deleteBlackboardArtifactTag(tag); + } + catch (TskCoreException ex) { + Logger.getLogger(AddContentTagAction.class.getName()).log(Level.SEVERE, "Error deleting tag", ex); + JOptionPane.showMessageDialog(null, "Unable to delete tag " + tag.getName() + ".", "Tag Deletion Error", JOptionPane.ERROR_MESSAGE); + } + } + } +} + diff --git a/Core/src/org/sleuthkit/autopsy/actions/DeleteContentTagAction.java b/Core/src/org/sleuthkit/autopsy/actions/DeleteContentTagAction.java new file mode 100755 index 0000000000..6f4bfd42a1 --- /dev/null +++ b/Core/src/org/sleuthkit/autopsy/actions/DeleteContentTagAction.java @@ -0,0 +1,66 @@ +/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis Technology Corp. + * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.actions; + +import java.awt.event.ActionEvent; +import java.util.Collection; +import java.util.logging.Level; +import java.util.logging.Logger; +import javax.swing.JOptionPane; +import org.openide.util.Utilities; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.datamodel.ContentTag; +import org.sleuthkit.datamodel.TskCoreException; + +/** + * Instances of this Action allow users to delete tags applied to content. + */ +public class DeleteContentTagAction extends TagAction { + private static final String MENU_TEXT = "Delete Tag(s)"; + + // This class is a singleton to support multi-selection of nodes, since + // org.openide.nodes.NodeOp.findActions(Node[] nodes) will only pick up an Action if every + // node in the array returns a reference to the same action object from Node.getActions(boolean). 
+ private static DeleteContentTagAction instance; + + public static synchronized DeleteContentTagAction getInstance() { + if (null == instance) { + instance = new DeleteContentTagAction(); + } + return instance; + } + + private DeleteContentTagAction() { + super(MENU_TEXT); + } + + @Override + protected void doAction(ActionEvent e) { + Collection<? extends ContentTag> selectedTags = Utilities.actionsGlobalContext().lookupAll(ContentTag.class); + for (ContentTag tag : selectedTags) { + try { + Case.getCurrentCase().getServices().getTagsManager().deleteContentTag(tag); + } + catch (TskCoreException ex) { + Logger.getLogger(AddContentTagAction.class.getName()).log(Level.SEVERE, "Error deleting tag", ex); + JOptionPane.showMessageDialog(null, "Unable to delete tag " + tag.getName() + ".", "Tag Deletion Error", JOptionPane.ERROR_MESSAGE); + } + } + } +} diff --git a/Core/src/org/sleuthkit/autopsy/directorytree/TagAndCommentDialog.form b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameAndCommentDialog.form similarity index 81% rename from Core/src/org/sleuthkit/autopsy/directorytree/TagAndCommentDialog.form rename to Core/src/org/sleuthkit/autopsy/actions/GetTagNameAndCommentDialog.form index 1bacfb8942..cbbdaebb26 100644 --- a/Core/src/org/sleuthkit/autopsy/directorytree/TagAndCommentDialog.form +++ b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameAndCommentDialog.form @@ -78,7 +78,7 @@ - + @@ -91,7 +91,7 @@ - + @@ -104,7 +104,7 @@ - + @@ -114,31 +114,31 @@ - + - + - + - + - + diff --git a/Core/src/org/sleuthkit/autopsy/directorytree/TagAndCommentDialog.java b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameAndCommentDialog.java similarity index 70% rename from Core/src/org/sleuthkit/autopsy/directorytree/TagAndCommentDialog.java rename to Core/src/org/sleuthkit/autopsy/actions/GetTagNameAndCommentDialog.java index 4d00770205..e953694d90 100644 --- a/Core/src/org/sleuthkit/autopsy/directorytree/TagAndCommentDialog.java +++
b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameAndCommentDialog.java @@ -16,11 +16,13 @@ * See the License for the specific language governing permissions and * limitations under the License. */ -package org.sleuthkit.autopsy.directorytree; +package org.sleuthkit.autopsy.actions; import java.awt.event.ActionEvent; import java.awt.event.KeyEvent; -import java.util.TreeSet; +import java.util.HashMap; +import java.util.List; +import java.util.logging.Level; import javax.swing.AbstractAction; import javax.swing.ActionMap; import javax.swing.InputMap; @@ -29,28 +31,28 @@ import javax.swing.JDialog; import javax.swing.JFrame; import javax.swing.KeyStroke; import org.openide.windows.WindowManager; -import org.sleuthkit.autopsy.datamodel.Tags; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.autopsy.casemodule.services.TagsManager; +import org.sleuthkit.autopsy.coreutils.Logger; +import org.sleuthkit.datamodel.TagName; +import org.sleuthkit.datamodel.TskCoreException; -/** - * Tag dialog for tagging files and results. User enters an optional comment. 
- */ -public class TagAndCommentDialog extends JDialog { +public class GetTagNameAndCommentDialog extends JDialog { + private static final String NO_TAG_NAMES_MESSAGE = "No Tags"; + private final HashMap<String, TagName> tagNames = new HashMap<>(); + private TagNameAndComment tagNameAndComment = null; - private static final String NO_TAG_MESSAGE = "No Tags"; - private String tagName = ""; - private String comment = ""; - - public static class CommentedTag { - private String name; + public static class TagNameAndComment { + private TagName tagName; private String comment; - CommentedTag(String name, String comment) { - this.name = name; + private TagNameAndComment(TagName tagName, String comment) { + this.tagName = tagName; this.comment = comment; } - public String getName() { - return name; + public TagName getTagName() { + return tagName; } public String getComment() { @@ -58,25 +60,16 @@ public class TagAndCommentDialog extends JDialog { } } - public static CommentedTag doDialog() { - TagAndCommentDialog dialog = new TagAndCommentDialog(); - if (!dialog.tagName.isEmpty()) { - return new CommentedTag(dialog.tagName, dialog.comment); - } - else { - return null; - } + public static TagNameAndComment doDialog() { + GetTagNameAndCommentDialog dialog = new GetTagNameAndCommentDialog(); + return dialog.tagNameAndComment; } - /** - * Creates new form TagDialog - */ - private TagAndCommentDialog() { - super((JFrame)WindowManager.getDefault().getMainWindow(), "Tag and Comment", true); - + private GetTagNameAndCommentDialog() { + super((JFrame)WindowManager.getDefault().getMainWindow(), "Create Tag", true); initComponents(); - // Close the dialog when Esc is pressed + // Set up the dialog to close when Esc is pressed.
String cancelName = "cancel"; InputMap inputMap = getRootPane().getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT); inputMap.put(KeyStroke.getKeyStroke(KeyEvent.VK_ESCAPE, 0), cancelName); @@ -87,24 +80,30 @@ public class TagAndCommentDialog extends JDialog { dispose(); } }); - - // get the current list of tag names - TreeSet<String> tags = Tags.getAllTagNames(); - - // if there are no tags, add the NO_TAG_MESSAGE - if (tags.isEmpty()) { - tags.add(NO_TAG_MESSAGE); + + // Populate the combo box with the available tag names and save the + // tag name DTOs to be able to return the one the user selects. + TagsManager tagsManager = Case.getCurrentCase().getServices().getTagsManager(); + List<TagName> currentTagNames = null; + try { + currentTagNames = tagsManager.getAllTagNames(); } - - // add the tags to the combo box - for (String tag : tags) { - tagCombo.addItem(tag); + catch (TskCoreException ex) { + Logger.getLogger(GetTagNameAndCommentDialog.class.getName()).log(Level.SEVERE, "Failed to get tag names", ex); + } + if (null == currentTagNames || currentTagNames.isEmpty()) { + tagCombo.addItem(NO_TAG_NAMES_MESSAGE); + } + else { + for (TagName tagName : currentTagNames) { + tagNames.put(tagName.getDisplayName(), tagName); + tagCombo.addItem(tagName.getDisplayName()); + } } - //center it - this.setLocationRelativeTo(WindowManager.getDefault().getMainWindow()); - - setVisible(true); // blocks + // Center and show the dialog box.
+ this.setLocationRelativeTo(WindowManager.getDefault().getMainWindow()); + setVisible(true); } /** @@ -130,30 +129,30 @@ public class TagAndCommentDialog extends JDialog { } }); - org.openide.awt.Mnemonics.setLocalizedText(okButton, org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.okButton.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(okButton, org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.okButton.text")); // NOI18N okButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent evt) { okButtonActionPerformed(evt); } }); - org.openide.awt.Mnemonics.setLocalizedText(cancelButton, org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.cancelButton.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(cancelButton, org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.cancelButton.text")); // NOI18N cancelButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent evt) { cancelButtonActionPerformed(evt); } }); - tagCombo.setToolTipText(org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.tagCombo.toolTipText")); // NOI18N + tagCombo.setToolTipText(org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.tagCombo.toolTipText")); // NOI18N - org.openide.awt.Mnemonics.setLocalizedText(tagLabel, org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.tagLabel.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(tagLabel, org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.tagLabel.text")); // NOI18N - org.openide.awt.Mnemonics.setLocalizedText(commentLabel, 
org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.commentLabel.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(commentLabel, org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.commentLabel.text")); // NOI18N - commentText.setText(org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.commentText.text")); // NOI18N - commentText.setToolTipText(org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.commentText.toolTipText")); // NOI18N + commentText.setText(org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.commentText.text")); // NOI18N + commentText.setToolTipText(org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.commentText.toolTipText")); // NOI18N - org.openide.awt.Mnemonics.setLocalizedText(newTagButton, org.openide.util.NbBundle.getMessage(TagAndCommentDialog.class, "TagAndCommentDialog.newTagButton.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(newTagButton, org.openide.util.NbBundle.getMessage(GetTagNameAndCommentDialog.class, "GetTagNameAndCommentDialog.newTagButton.text")); // NOI18N newTagButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent evt) { newTagButtonActionPerformed(evt); @@ -212,27 +211,26 @@ public class TagAndCommentDialog extends JDialog { }// //GEN-END:initComponents private void okButtonActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_okButtonActionPerformed - tagName = (String)tagCombo.getSelectedItem(); - comment = commentText.getText(); + tagNameAndComment = new TagNameAndComment(tagNames.get((String)tagCombo.getSelectedItem()), commentText.getText()); dispose(); }//GEN-LAST:event_okButtonActionPerformed private void cancelButtonActionPerformed(java.awt.event.ActionEvent evt) 
{//GEN-FIRST:event_cancelButtonActionPerformed + tagNameAndComment = null; dispose(); }//GEN-LAST:event_cancelButtonActionPerformed - /** - * Closes the dialog - */ private void closeDialog(java.awt.event.WindowEvent evt) {//GEN-FIRST:event_closeDialog + tagNameAndComment = null; dispose(); }//GEN-LAST:event_closeDialog private void newTagButtonActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_newTagButtonActionPerformed - String newTagName = CreateTagDialog.getNewTagNameDialog(null); + TagName newTagName = GetTagNameDialog.doDialog(); if (newTagName != null) { - tagCombo.addItem(newTagName); - tagCombo.setSelectedItem(newTagName); + tagNames.put(newTagName.getDisplayName(), newTagName); + tagCombo.addItem(newTagName.getDisplayName()); + tagCombo.setSelectedItem(newTagName.getDisplayName()); } }//GEN-LAST:event_newTagButtonActionPerformed diff --git a/Core/src/org/sleuthkit/autopsy/directorytree/CreateTagDialog.form b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameDialog.form similarity index 87% rename from Core/src/org/sleuthkit/autopsy/directorytree/CreateTagDialog.form rename to Core/src/org/sleuthkit/autopsy/actions/GetTagNameDialog.form index 1136325546..a281ea606e 100644 --- a/Core/src/org/sleuthkit/autopsy/directorytree/CreateTagDialog.form +++ b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameDialog.form @@ -72,7 +72,7 @@ - + @@ -82,7 +82,7 @@ - + @@ -124,7 +124,7 @@ - + @@ -133,7 +133,7 @@ - + @@ -168,14 +168,14 @@ - + - + diff --git a/Core/src/org/sleuthkit/autopsy/directorytree/CreateTagDialog.java b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameDialog.java similarity index 64% rename from Core/src/org/sleuthkit/autopsy/directorytree/CreateTagDialog.java rename to Core/src/org/sleuthkit/autopsy/actions/GetTagNameDialog.java index 8fb571cf06..fb0d50ddc4 100644 --- a/Core/src/org/sleuthkit/autopsy/directorytree/CreateTagDialog.java +++ b/Core/src/org/sleuthkit/autopsy/actions/GetTagNameDialog.java @@ -16,72 +16,130 @@ * See the 
License for the specific language governing permissions and * limitations under the License. */ -package org.sleuthkit.autopsy.directorytree; +package org.sleuthkit.autopsy.actions; -import java.awt.Dimension; -import java.awt.Toolkit; +import java.awt.event.ActionEvent; import java.awt.event.KeyEvent; import java.util.ArrayList; +import java.util.HashMap; import java.util.List; +import java.util.logging.Level; +import javax.swing.AbstractAction; +import javax.swing.ActionMap; +import javax.swing.InputMap; +import javax.swing.JComponent; import javax.swing.JDialog; import javax.swing.JFrame; import javax.swing.JOptionPane; +import javax.swing.KeyStroke; import javax.swing.table.AbstractTableModel; import org.openide.util.ImageUtilities; -import org.sleuthkit.autopsy.datamodel.Tags; +import org.openide.windows.WindowManager; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.autopsy.casemodule.services.TagsManager; +import org.sleuthkit.autopsy.coreutils.Logger; +import org.sleuthkit.datamodel.TagName; +import org.sleuthkit.datamodel.TskCoreException; -public class CreateTagDialog extends JDialog { +public class GetTagNameDialog extends JDialog { private static final String TAG_ICON_PATH = "org/sleuthkit/autopsy/images/tag-folder-blue-icon-16.png"; - private static String newTagName; + private final HashMap tagNames = new HashMap<>(); + private TagName tagName = null; - /** - * Creates new form CreateTagDialog - */ - private CreateTagDialog(JFrame parent) { - super(parent, true); - init(); - } + public static TagName doDialog() { + GetTagNameDialog dialog = new GetTagNameDialog(); + return dialog.tagName; + } - public static String getNewTagNameDialog(JFrame parent) { - new CreateTagDialog(parent); - return newTagName; - } - - private void init() { - - setTitle("Create a new tag"); - + private GetTagNameDialog() { + super((JFrame)WindowManager.getDefault().getMainWindow(), "Create Tag", true); + 
setIconImage(ImageUtilities.loadImage(TAG_ICON_PATH)); initComponents(); - tagsTable.setModel(new TagsTableModel()); + // Set up the dialog to close when Esc is pressed. + String cancelName = "cancel"; + InputMap inputMap = getRootPane().getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT); + inputMap.put(KeyStroke.getKeyStroke(KeyEvent.VK_ESCAPE, 0), cancelName); + ActionMap actionMap = getRootPane().getActionMap(); + actionMap.put(cancelName, new AbstractAction() { + @Override + public void actionPerformed(ActionEvent e) { + dispose(); + } + }); + + // Get the current set of tag names and hash them for a speedy lookup in + // case the user chooses an existing tag name from the tag names table. + TagsManager tagsManager = Case.getCurrentCase().getServices().getTagsManager(); + List currentTagNames = null; + try { + currentTagNames = tagsManager.getAllTagNames(); + } + catch (TskCoreException ex) { + Logger.getLogger(GetTagNameDialog.class.getName()).log(Level.SEVERE, "Failed to get tag names", ex); + } + if (null != currentTagNames) { + for (TagName name : currentTagNames) { + this.tagNames.put(name.getDisplayName(), name); + } + } + else { + currentTagNames = new ArrayList<>(); + } + + // Populate the tag names table. + tagsTable.setModel(new TagsTableModel(currentTagNames)); tagsTable.setTableHeader(null); - - //completely disable selections tagsTable.setCellSelectionEnabled(false); tagsTable.setFocusable(false); tagsTable.setRowHeight(tagsTable.getRowHeight() + 5); - - setIconImage(ImageUtilities.loadImage(TAG_ICON_PATH)); - - Dimension screenDimension = Toolkit.getDefaultToolkit().getScreenSize(); - // set the popUp window / JFrame - int w = this.getSize().width; - int h = this.getSize().height; - - // set the location of the popUp Window on the center of the screen - setLocation((screenDimension.width - w) / 2, (screenDimension.height - h) / 2); - setVisible(true); //blocks + + // Center and show the dialog box. 
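The Esc-to-close wiring added to the GetTagNameDialog constructor above is the standard Swing key-binding idiom: an InputMap entry maps the keystroke to an action name, and an ActionMap entry maps that name to an Action. A standalone sketch of the same pattern using only plain Swing (the helper class and method names here are illustrative, not Autopsy API):

```java
import java.awt.event.ActionEvent;
import java.awt.event.KeyEvent;
import javax.swing.AbstractAction;
import javax.swing.ActionMap;
import javax.swing.InputMap;
import javax.swing.JComponent;
import javax.swing.KeyStroke;

class EscapeBindingSketch {
    // Binds the Esc key to a caller-supplied action on any Swing component,
    // mirroring the InputMap/ActionMap wiring in the GetTagNameDialog constructor.
    static void bindEscape(JComponent component, final Runnable onEscape) {
        String cancelName = "cancel";
        InputMap inputMap = component.getInputMap(JComponent.WHEN_ANCESTOR_OF_FOCUSED_COMPONENT);
        inputMap.put(KeyStroke.getKeyStroke(KeyEvent.VK_ESCAPE, 0), cancelName);
        ActionMap actionMap = component.getActionMap();
        actionMap.put(cancelName, new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent e) {
                onEscape.run();
            }
        });
    }
}
```

In the dialog itself the Runnable body is simply dispose(), which unblocks the modal setVisible(true) call.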
+ this.setLocationRelativeTo(WindowManager.getDefault().getMainWindow()); + setVisible(true); } private boolean containsIllegalCharacters(String content) { - if ((content.contains("\\") || content.contains(":") || content.contains("*") - || content.contains("?") || content.contains("\"") || content.contains("<") - || content.contains(">") || content.contains("|"))) { - return true; - } - return false; + return (content.contains("\\")|| + content.contains(":") || + content.contains("*") || + content.contains("?") || + content.contains("\"")|| + content.contains("<") || + content.contains(">") || + content.contains("|")); } + private class TagsTableModel extends AbstractTableModel { + private final ArrayList<TagName> tagNames = new ArrayList<>(); + + TagsTableModel(List<TagName> tagNames) { + for (TagName tagName : tagNames) { + this.tagNames.add(tagName); + } + } + + @Override + public int getRowCount() { + return tagNames.size(); + } + + @Override + public boolean isCellEditable(int rowIndex, int columnIndex) { + return false; + } + + @Override + public int getColumnCount() { + return 1; + } + + @Override + public String getValueAt(int rowIndex, int columnIndex) { + return tagNames.get(rowIndex).getDisplayName(); + } + } + /** * This method is called from within the constructor to initialize the form. * WARNING: Do NOT modify this code. 
The content of this method is always @@ -107,14 +165,14 @@ public class CreateTagDialog extends JDialog { } }); - org.openide.awt.Mnemonics.setLocalizedText(cancelButton, org.openide.util.NbBundle.getMessage(CreateTagDialog.class, "CreateTagDialog.cancelButton.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(cancelButton, org.openide.util.NbBundle.getMessage(GetTagNameDialog.class, "GetTagNameDialog.cancelButton.text")); // NOI18N cancelButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent evt) { cancelButtonActionPerformed(evt); } }); - org.openide.awt.Mnemonics.setLocalizedText(okButton, org.openide.util.NbBundle.getMessage(CreateTagDialog.class, "CreateTagDialog.okButton.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(okButton, org.openide.util.NbBundle.getMessage(GetTagNameDialog.class, "GetTagNameDialog.okButton.text")); // NOI18N okButton.addActionListener(new java.awt.event.ActionListener() { public void actionPerformed(java.awt.event.ActionEvent evt) { okButtonActionPerformed(evt); @@ -137,13 +195,13 @@ public class CreateTagDialog extends JDialog { tagsTable.setTableHeader(null); jScrollPane1.setViewportView(tagsTable); - org.openide.awt.Mnemonics.setLocalizedText(preexistingLabel, org.openide.util.NbBundle.getMessage(CreateTagDialog.class, "CreateTagDialog.preexistingLabel.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(preexistingLabel, org.openide.util.NbBundle.getMessage(GetTagNameDialog.class, "GetTagNameDialog.preexistingLabel.text")); // NOI18N - newTagPanel.setBorder(javax.swing.BorderFactory.createTitledBorder(org.openide.util.NbBundle.getMessage(CreateTagDialog.class, "CreateTagDialog.newTagPanel.border.title"))); // NOI18N + newTagPanel.setBorder(javax.swing.BorderFactory.createTitledBorder(org.openide.util.NbBundle.getMessage(GetTagNameDialog.class, "GetTagNameDialog.newTagPanel.border.title"))); // NOI18N - 
org.openide.awt.Mnemonics.setLocalizedText(tagNameLabel, org.openide.util.NbBundle.getMessage(CreateTagDialog.class, "CreateTagDialog.tagNameLabel.text")); // NOI18N + org.openide.awt.Mnemonics.setLocalizedText(tagNameLabel, org.openide.util.NbBundle.getMessage(GetTagNameDialog.class, "GetTagNameDialog.tagNameLabel.text")); // NOI18N - tagNameField.setText(org.openide.util.NbBundle.getMessage(CreateTagDialog.class, "CreateTagDialog.tagNameField.text")); // NOI18N + tagNameField.setText(org.openide.util.NbBundle.getMessage(GetTagNameDialog.class, "GetTagNameDialog.tagNameField.text")); // NOI18N tagNameField.addKeyListener(new java.awt.event.KeyAdapter() { public void keyReleased(java.awt.event.KeyEvent evt) { tagNameFieldKeyReleased(evt); @@ -211,20 +269,39 @@ public class CreateTagDialog extends JDialog { }// //GEN-END:initComponents private void cancelButtonActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_cancelButtonActionPerformed - newTagName = null; + tagName = null; dispose(); }//GEN-LAST:event_cancelButtonActionPerformed private void okButtonActionPerformed(java.awt.event.ActionEvent evt) {//GEN-FIRST:event_okButtonActionPerformed - String tagName = tagNameField.getText(); - if (tagName.isEmpty()) { + String tagDisplayName = tagNameField.getText(); + if (tagDisplayName.isEmpty()) { JOptionPane.showMessageDialog(null, "Must supply a tag name to continue.", "Tag Name", JOptionPane.ERROR_MESSAGE); - } else if (containsIllegalCharacters(tagName)) { - JOptionPane.showMessageDialog(null, "The tag name contains illegal characters.\nCannot contain any of the following symbols: \\ : * ? \" < > |", - "Illegal Characters", JOptionPane.ERROR_MESSAGE); - } else { - newTagName = tagName; - dispose(); + } + else if (containsIllegalCharacters(tagDisplayName)) { + JOptionPane.showMessageDialog(null, "The tag name contains illegal characters.\nCannot contain any of the following symbols: \\ : * ? 
\" < > |", "Illegal Characters", JOptionPane.ERROR_MESSAGE); + } + else { + tagName = tagNames.get(tagDisplayName); + if (tagName == null) { + try { + tagName = Case.getCurrentCase().getServices().getTagsManager().addTagName(tagDisplayName); + dispose(); + } + catch (TskCoreException ex) { + Logger.getLogger(AddTagAction.class.getName()).log(Level.SEVERE, "Error adding " + tagDisplayName + " tag name", ex); + JOptionPane.showMessageDialog(null, "Unable to add the " + tagDisplayName + " tag name to the case.", "Tagging Error", JOptionPane.ERROR_MESSAGE); + tagName = null; + } + catch (TagsManager.TagNameAlreadyExistsException ex) { + Logger.getLogger(AddTagAction.class.getName()).log(Level.SEVERE, "Error adding " + tagDisplayName + " tag name", ex); + JOptionPane.showMessageDialog(null, "A " + tagDisplayName + " tag name has already been defined.", "Duplicate Tag Error", JOptionPane.ERROR_MESSAGE); + tagName = null; + } + } + else { + dispose(); + } } }//GEN-LAST:event_okButtonActionPerformed @@ -251,32 +328,5 @@ public class CreateTagDialog extends JDialog { private javax.swing.JTable tagsTable; // End of variables declaration//GEN-END:variables - private class TagsTableModel extends AbstractTableModel { - List tagNames; - - TagsTableModel() { - tagNames = new ArrayList<>(Tags.getAllTagNames()); - } - - @Override - public int getRowCount() { - return tagNames.size(); - } - - @Override - public boolean isCellEditable(int rowIndex, int columnIndex) { - return false; - } - - @Override - public int getColumnCount() { - return 1; - } - - @Override - public String getValueAt(int rowIndex, int columnIndex) { - return tagNames.get(rowIndex); - } - } } diff --git a/Core/src/org/sleuthkit/autopsy/actions/TagAction.java b/Core/src/org/sleuthkit/autopsy/actions/TagAction.java new file mode 100755 index 0000000000..0d6e74efd3 --- /dev/null +++ b/Core/src/org/sleuthkit/autopsy/actions/TagAction.java @@ -0,0 +1,62 @@ +/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis 
Technology Corp. + * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.actions; + +import java.awt.event.ActionEvent; +import javax.swing.AbstractAction; +import org.sleuthkit.autopsy.directorytree.DirectoryTreeTopComponent; +import org.sleuthkit.datamodel.BlackboardArtifact; + +/** + * Abstract base class for Actions involving tags. + */ +public abstract class TagAction extends AbstractAction { + public TagAction(String menuText) { + super(menuText); + } + + @Override + public void actionPerformed(ActionEvent event) { + doAction(event); + refreshDirectoryTree(); + } + + /** + * Derived classes must implement this Template Method for actionPerformed(). + * @param event ActionEvent object passed to actionPerformed() + */ + abstract protected void doAction(ActionEvent event); + + /** + * Derived classes should call this method any time a tag is created, updated + * or deleted outside of an actionPerformed() call. + */ + protected void refreshDirectoryTree() { + // The way the "directory tree" currently works, a new tags sub-tree + // needs to be made to reflect the results of invoking tag Actions. The + // way to do this is to call DirectoryTreeTopComponent.refreshTree(), + // which calls RootContentChildren.refreshKeys(BlackboardArtifact.ARTIFACT_TYPE... types) + // for the RootContentChildren object that is the child factory for the + // ResultsNode that is the root of the tags sub-tree. 
There is a switch + // statement in RootContentChildren.refreshKeys() that maps both + // BlackboardArtifact.ARTIFACT_TYPE.TSK_TAG_FILE and BlackboardArtifact.ARTIFACT_TYPE.TSK_TAG_ARTIFACT + // to making a call to refreshKey(TagsNodeKey). + DirectoryTreeTopComponent.findInstance().refreshTree(BlackboardArtifact.ARTIFACT_TYPE.TSK_TAG_FILE); + } +} diff --git a/Core/src/org/sleuthkit/autopsy/casemodule/Case.java b/Core/src/org/sleuthkit/autopsy/casemodule/Case.java index 29151bc6ea..08ec802b95 100644 --- a/Core/src/org/sleuthkit/autopsy/casemodule/Case.java +++ b/Core/src/org/sleuthkit/autopsy/casemodule/Case.java @@ -109,6 +109,71 @@ public class Case implements SleuthkitCase.ErrorObserver { // pcs is initialized in CaseListener constructor private static final PropertyChangeSupport pcs = new PropertyChangeSupport(Case.class); + /** + * Events that the case module will fire. Event listeners can get the event + * name by using String returned by toString() method on a specific event. + */ + /* @@@ BC: I added this as a place holder for what I want this to be, but + * this is not the time to change it. We'll start using this at a major release + * version. + */ + private enum CaseModuleEvent_DoNotUse { + /** + * Property name that indicates the name of the current case has changed. + * Fired with the case is renamed, and when the current case is + * opened/closed/changed. The value is a String: the name of the case. The + * empty string ("") is used for no open case. + */ + // @@@ BC: I propose that this is no longer called for case open/close. + CASE_NAME("caseName"), + + /** + * Property name that indicates the number of the current case has changed. + * Fired with the case number is changed. The value is an int: the number of + * the case. -1 is used for no case number set. + */ + CASE_NUMBER("caseNumber"), + + /** + * Property name that indicates the examiner of the current case has + * changed. Fired with the case examiner is changed. 
The value is a String: + * the name of the examiner. The empty string ("") is used for no examiner + * set. + */ + CASE_EXAMINER("caseExaminer"), + + /** + * Property name that indicates a new data source (image, disk or local + * file) has been added to the current case. The new value is the + * newly-added instance of the new data source, and the old value is always + * null. + */ + CASE_ADD_DATA_SOURCE("addDataSource"), + + /** + * Property name that indicates a data source has been removed from the + * current case. The "old value" is the (int) content ID of the data source + * that was removed, the new value is the instance of the data source. + */ + CASE_DEL_DATA_SOURCE("removeDataSource"), + + /** + * Property name that indicates the currently open case has changed. The new + * value is the instance of the opened Case, or null if there is no open + * case. The old value is the instance of the closed Case, or null if there + * was no open case. + */ + CASE_CURRENT_CASE("currentCase"); + + private String name; + CaseModuleEvent_DoNotUse(String name) { + this.name = name; + } + + public String getName() { + return this.name; + } + }; private String name; diff --git a/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.form b/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.form index ab1236901b..2a2c068ccf 100644 --- a/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.form +++ b/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.form @@ -30,7 +30,7 @@ - + @@ -51,7 +51,7 @@ - + diff --git a/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.java b/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.java index fd399a59ce..3c4dd72bcf 100644 --- a/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.java +++ b/Core/src/org/sleuthkit/autopsy/casemodule/NewCaseVisualPanel1.java @@ -93,7 +93,7 @@ final class NewCaseVisualPanel1 extends JPanel implements DocumentListener{ jLabel2 = new 
javax.swing.JLabel(); caseDirTextField = new javax.swing.JTextField(); - jLabel1.setFont(new java.awt.Font("Tahoma", 1, 14)); + jLabel1.setFont(new java.awt.Font("Tahoma", 1, 14)); // NOI18N org.openide.awt.Mnemonics.setLocalizedText(jLabel1, org.openide.util.NbBundle.getMessage(NewCaseVisualPanel1.class, "NewCaseVisualPanel1.jLabel1.text_1")); // NOI18N org.openide.awt.Mnemonics.setLocalizedText(caseNameLabel, org.openide.util.NbBundle.getMessage(NewCaseVisualPanel1.class, "NewCaseVisualPanel1.caseNameLabel.text_1")); // NOI18N @@ -133,7 +133,7 @@ final class NewCaseVisualPanel1 extends JPanel implements DocumentListener{ .addComponent(caseParentDirTextField, javax.swing.GroupLayout.PREFERRED_SIZE, 296, javax.swing.GroupLayout.PREFERRED_SIZE)) .addGroup(javax.swing.GroupLayout.Alignment.LEADING, layout.createSequentialGroup() .addComponent(caseNameLabel) - .addGap(26, 26, 26) + .addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.RELATED, javax.swing.GroupLayout.DEFAULT_SIZE, Short.MAX_VALUE) .addComponent(caseNameTextField, javax.swing.GroupLayout.PREFERRED_SIZE, 296, javax.swing.GroupLayout.PREFERRED_SIZE)) .addComponent(caseDirTextField, javax.swing.GroupLayout.Alignment.LEADING, javax.swing.GroupLayout.PREFERRED_SIZE, 380, javax.swing.GroupLayout.PREFERRED_SIZE)) .addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.UNRELATED) @@ -148,7 +148,7 @@ final class NewCaseVisualPanel1 extends JPanel implements DocumentListener{ .addGap(18, 18, 18) .addGroup(layout.createParallelGroup(javax.swing.GroupLayout.Alignment.BASELINE) .addComponent(caseNameLabel) - .addComponent(caseNameTextField, javax.swing.GroupLayout.PREFERRED_SIZE, 20, javax.swing.GroupLayout.PREFERRED_SIZE)) + .addComponent(caseNameTextField, javax.swing.GroupLayout.PREFERRED_SIZE, javax.swing.GroupLayout.DEFAULT_SIZE, javax.swing.GroupLayout.PREFERRED_SIZE)) .addPreferredGap(javax.swing.LayoutStyle.ComponentPlacement.UNRELATED) 
.addGroup(layout.createParallelGroup(javax.swing.GroupLayout.Alignment.BASELINE) .addComponent(caseDirLabel) diff --git a/Core/src/org/sleuthkit/autopsy/casemodule/services/Services.java b/Core/src/org/sleuthkit/autopsy/casemodule/services/Services.java old mode 100644 new mode 100755 index 8718406d5c..069b13ef2e --- a/Core/src/org/sleuthkit/autopsy/casemodule/services/Services.java +++ b/Core/src/org/sleuthkit/autopsy/casemodule/services/Services.java @@ -2,7 +2,7 @@ * * Autopsy Forensic Browser * - * Copyright 2012 Basis Technology Corp. + * Copyright 2012-2013 Basis Technology Corp. * * Copyright 2012 42six Solutions. * Contact: aebadirad 42six com @@ -37,21 +37,29 @@ public class Services implements Closeable { // NOTE: all new services added to Services class must be added to this list // of services. - private List services = new ArrayList(); + private List services = new ArrayList<>(); // services private FileManager fileManager; + private TagsManager tagsManager; public Services(SleuthkitCase tskCase) { this.tskCase = tskCase; //create and initialize FileManager as early as possibly in the new/opened Case fileManager = new FileManager(tskCase); services.add(fileManager); + + tagsManager = new TagsManager(tskCase); + services.add(tagsManager); } public FileManager getFileManager() { return fileManager; } + + public TagsManager getTagsManager() { + return tagsManager; + } @Override public void close() throws IOException { diff --git a/Core/src/org/sleuthkit/autopsy/casemodule/services/TagsManager.java b/Core/src/org/sleuthkit/autopsy/casemodule/services/TagsManager.java new file mode 100755 index 0000000000..67788f8300 --- /dev/null +++ b/Core/src/org/sleuthkit/autopsy/casemodule/services/TagsManager.java @@ -0,0 +1,464 @@ +/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis Technology Corp. 
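The change to Services.java above registers the new TagsManager in the same services list as FileManager, so that Services.close() can release every per-case service in one pass. A minimal standalone sketch of that registry-of-Closeables pattern (the class and method names here are illustrative, not the Autopsy API):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the per-case service registry pattern used by Services:
// each service is appended to a list at construction time so that close()
// can shut them all down together when the case is closed.
class ServiceRegistrySketch implements Closeable {
    private final List<Closeable> services = new ArrayList<>();

    // Registers a service and returns it, so fields can be assigned inline.
    <T extends Closeable> T register(T service) {
        services.add(service);
        return service;
    }

    @Override
    public void close() throws IOException {
        for (Closeable service : services) {
            service.close();
        }
    }
}
```

This is why the comment in Services.java warns that every new service must be added to the list: anything left out would never be closed with the case.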
+ * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.casemodule.services; + +import java.io.Closeable; +import java.io.IOException; +import java.util.Arrays; +import java.util.HashMap; +import java.util.List; +import java.util.logging.Level; +import java.util.logging.Logger; +import org.sleuthkit.autopsy.coreutils.ModuleSettings; +import org.sleuthkit.datamodel.BlackboardArtifact; +import org.sleuthkit.datamodel.BlackboardArtifactTag; +import org.sleuthkit.datamodel.Content; +import org.sleuthkit.datamodel.ContentTag; +import org.sleuthkit.datamodel.SleuthkitCase; +import org.sleuthkit.datamodel.TagName; +import org.sleuthkit.datamodel.TskCoreException; + +/** + * A per case instance of this class functions as an Autopsy service that + * manages the creation, updating, and deletion of tags applied to content and + * blackboard artifacts by users. + */ +public class TagsManager implements Closeable { + private static final String TAGS_SETTINGS_NAME = "Tags"; + private static final String TAG_NAMES_SETTING_KEY = "TagNames"; + private final SleuthkitCase tskCase; + private final HashMap<String, TagName> uniqueTagNames = new HashMap<>(); + private boolean tagNamesInitialized = false; // @@@ This is part of a work around to be removed when database access on the EDT is correctly synchronized. + + // Use this exception and the member hash map to manage uniqueness of tag 
This is deemed more proactive and informative than leaving this to + // the UNIQUE constraint on the display_name field of the tag_names table in + // the case database. + public class TagNameAlreadyExistsException extends Exception { + } + + /** + * Package-scope constructor for use of the Services class. An instance of + * TagsManager should be created for each case that is opened. + * @param [in] tskCase The SleuthkitCase object for the current case. + */ + TagsManager(SleuthkitCase tskCase) { + this.tskCase = tskCase; + // @@@ The removal of this call is a work around until database access on the EDT is correctly synchronized. + // getExistingTagNames(); + } + + /** + * Gets a list of all tag names currently available for tagging content or + * blackboard artifacts. + * @return A list, possibly empty, of TagName data transfer objects (DTOs). + * @throws TskCoreException + */ + public synchronized List getAllTagNames() throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getAllTagNames(); + } + + /** + * Gets a list of all tag names currently used for tagging content or + * blackboard artifacts. + * @return A list, possibly empty, of TagName data transfer objects (DTOs). + * @throws TskCoreException + */ + public synchronized List getTagNamesInUse() throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getTagNamesInUse(); + } + + /** + * Checks whether a tag name with a given display name exists. + * @param [in] tagDisplayName The display name for which to check. + * @return True if the tag name exists, false otherwise. 
+ */ + public synchronized boolean tagNameExists(String tagDisplayName) { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return uniqueTagNames.containsKey(tagDisplayName); + } + + /** + * Adds a new tag name to the current case and to the tags settings. + * @param [in] displayName The display name for the new tag name. + * @return A TagName data transfer object (DTO) representing the new tag name. + * @throws TagNameAlreadyExistsException, TskCoreException + */ + public TagName addTagName(String displayName) throws TagNameAlreadyExistsException, TskCoreException { + return addTagName(displayName, "", TagName.HTML_COLOR.NONE); + } + + /** + * Adds a new tag name to the current case and to the tags settings. + * @param [in] displayName The display name for the new tag name. + * @param [in] description The description for the new tag name. + * @return A TagName data transfer object (DTO) representing the new tag name. + * @throws TagNameAlreadyExistsException, TskCoreException + */ + public TagName addTagName(String displayName, String description) throws TagNameAlreadyExistsException, TskCoreException { + return addTagName(displayName, description, TagName.HTML_COLOR.NONE); + } + + /** + * Adds a new tag name to the current case and to the tags settings. + * @param [in] displayName The display name for the new tag name. + * @param [in] description The description for the new tag name. + * @param [in] color The HTML color to associate with the new tag name. + * @return A TagName data transfer object (DTO) representing the new tag name. 
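The addTagName() overloads above funnel into a synchronized variant that consults the uniqueTagNames map before touching the database, throwing TagNameAlreadyExistsException on a duplicate display name. A standalone sketch of that look-before-insert uniqueness check (plain Strings stand in for the TagName, description, and color arguments; all names here are illustrative):

```java
import java.util.HashMap;

// Standalone sketch of the display-name uniqueness check backing
// TagsManager.addTagName(); the checked exception mirrors
// TagsManager.TagNameAlreadyExistsException.
class TagNameRegistrySketch {
    static class TagNameAlreadyExistsException extends Exception {
    }

    private final HashMap<String, String> uniqueTagNames = new HashMap<>();

    // Rejects a duplicate display name before it would ever reach the
    // database, rather than relying on a UNIQUE constraint violation.
    synchronized String addTagName(String displayName, String description) throws TagNameAlreadyExistsException {
        if (uniqueTagNames.containsKey(displayName)) {
            throw new TagNameAlreadyExistsException();
        }
        uniqueTagNames.put(displayName, description);
        return displayName;
    }

    synchronized boolean tagNameExists(String displayName) {
        return uniqueTagNames.containsKey(displayName);
    }
}
```

As the comment in TagsManager notes, the in-memory check is preferred because it lets callers get an informative exception instead of a raw constraint failure from the tag_names table.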
+ * @throws TagNameAlreadyExistsException, TskCoreException + */ + public synchronized TagName addTagName(String displayName, String description, TagName.HTML_COLOR color) throws TagNameAlreadyExistsException, TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + if (uniqueTagNames.containsKey(displayName)) { + throw new TagNameAlreadyExistsException(); + } + + // Add the tag name to the case. + TagName newTagName = tskCase.addTagName(displayName, description, color); + + // Add the tag name to the tags settings. + uniqueTagNames.put(newTagName.getDisplayName(), newTagName); + saveTagNamesToTagsSettings(); + + return newTagName; + } + + /** + * Tags a content object. + * @param [in] content The content to tag. + * @param [in] tagName The name to use for the tag. + * @return A ContentTag data transfer object (DTO) representing the new tag. + * @throws TskCoreException + */ + public ContentTag addContentTag(Content content, TagName tagName) throws TskCoreException { + return addContentTag(content, tagName, "", -1, -1); + } + + /** + * Tags a content object. + * @param [in] content The content to tag. + * @param [in] tagName The name to use for the tag. + * @param [in] comment A comment to store with the tag. + * @return A ContentTag data transfer object (DTO) representing the new tag. + * @throws TskCoreException + */ + public ContentTag addContentTag(Content content, TagName tagName, String comment) throws TskCoreException { + return addContentTag(content, tagName, comment, -1, -1); + } + + /** + * Tags a content object or a section of a content object. + * @param [in] content The content to tag. + * @param [in] tagName The name to use for the tag. + * @param [in] comment A comment to store with the tag. + * @param [in] beginByteOffset Designates the beginning of a tagged section. 
+ * @param [in] endByteOffset Designates the end of a tagged section. + * @return A ContentTag data transfer object (DTO) representing the new tag. + * @throws IllegalArgumentException, TskCoreException + */ + public synchronized ContentTag addContentTag(Content content, TagName tagName, String comment, long beginByteOffset, long endByteOffset) throws IllegalArgumentException, TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + if (beginByteOffset >= 0 && endByteOffset >= 1) { + if (beginByteOffset > content.getSize() - 1) { + throw new IllegalArgumentException("beginByteOffset = " + beginByteOffset + " out of content size range (0 - " + (content.getSize() - 1) + ")"); + } + + if (endByteOffset > content.getSize() - 1) { + throw new IllegalArgumentException("endByteOffset = " + endByteOffset + " out of content size range (0 - " + (content.getSize() - 1) + ")"); + } + + if (endByteOffset < beginByteOffset) { + throw new IllegalArgumentException("endByteOffset < beginByteOffset"); + } + } + + return tskCase.addContentTag(content, tagName, comment, beginByteOffset, endByteOffset); + } + + /** + * Deletes a content tag. + * @param [in] tag The tag to delete. + * @throws TskCoreException + */ + public synchronized void deleteContentTag(ContentTag tag) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + tskCase.deleteContentTag(tag); + } + + /** + * Gets all content tags for the current case. + * @return A list, possibly empty, of content tags. + * @throws TskCoreException + */ + public List getAllContentTags() throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. 
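The byte-offset checks in addContentTag() above can be read in isolation: offsets are only validated when a section is actually specified (beginByteOffset >= 0 and endByteOffset >= 1), both ends must fall inside [0, size - 1], and the end may not precede the beginning. A standalone sketch of the same validation, with the content size passed directly instead of calling Content.getSize() (class and method names are illustrative):

```java
// Standalone sketch of the byte-offset validation performed by
// TagsManager.addContentTag() before a section of content is tagged.
class TagOffsetValidatorSketch {
    static void validateOffsets(long beginByteOffset, long endByteOffset, long contentSize) {
        // Offsets of (-1, -1) mean "tag the whole content"; skip the checks.
        if (beginByteOffset >= 0 && endByteOffset >= 1) {
            if (beginByteOffset > contentSize - 1) {
                throw new IllegalArgumentException("beginByteOffset = " + beginByteOffset + " out of content size range (0 - " + (contentSize - 1) + ")");
            }
            if (endByteOffset > contentSize - 1) {
                throw new IllegalArgumentException("endByteOffset = " + endByteOffset + " out of content size range (0 - " + (contentSize - 1) + ")");
            }
            if (endByteOffset < beginByteOffset) {
                throw new IllegalArgumentException("endByteOffset < beginByteOffset");
            }
        }
    }
}
```

This matches the convenience overloads earlier in the class, which pass (-1, -1) so that whole-content tags bypass the range checks entirely.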
+ if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getAllContentTags(); + } + + /** + * Gets content tags count by tag name. + * @param [in] tagName The tag name of interest. + * @return A count of the content tags with the specified tag name. + * @throws TskCoreException + */ + public synchronized long getContentTagsCountByTagName(TagName tagName) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getContentTagsCountByTagName(tagName); + } + + /** + * Gets content tags by tag name. + * @param [in] tagName The tag name of interest. + * @return A list, possibly empty, of the content tags with the specified tag name. + * @throws TskCoreException + */ + public synchronized List getContentTagsByTagName(TagName tagName) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getContentTagsByTagName(tagName); + } + + /** + * Gets content tags by content. + * @param [in] content The content of interest. + * @return A list, possibly empty, of the tags that have been applied to the content. + * @throws TskCoreException + */ + public synchronized List getContentTagsByContent(Content content) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getContentTagsByContent(content); + } + + /** + * Tags a blackboard artifact object. + * @param [in] artifact The blackboard artifact to tag. + * @param [in] tagName The name to use for the tag. + * @return A BlackboardArtifactTag data transfer object (DTO) representing the new tag. 
+ * @throws TskCoreException + */ + public BlackboardArtifactTag addBlackboardArtifactTag(BlackboardArtifact artifact, TagName tagName) throws TskCoreException { + return addBlackboardArtifactTag(artifact, tagName, ""); + } + + /** + * Tags a blackboard artifact object. + * @param [in] artifact The blackboard artifact to tag. + * @param [in] tagName The name to use for the tag. + * @param [in] comment A comment to store with the tag. + * @return A BlackboardArtifactTag data transfer object (DTO) representing the new tag. + * @throws TskCoreException + */ + public synchronized BlackboardArtifactTag addBlackboardArtifactTag(BlackboardArtifact artifact, TagName tagName, String comment) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.addBlackboardArtifactTag(artifact, tagName, comment); + } + + /** + * Deletes a blackboard artifact tag. + * @param [in] tag The tag to delete. + * @throws TskCoreException + */ + public synchronized void deleteBlackboardArtifactTag(BlackboardArtifactTag tag) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + tskCase.deleteBlackboardArtifactTag(tag); + } + + /** + * Gets all blackboard artifact tags for the current case. + * @return A list, possibly empty, of blackboard artifact tags. + * @throws TskCoreException + */ + public List getAllBlackboardArtifactTags() throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getAllBlackboardArtifactTags(); + } + + /** + * Gets blackboard artifact tags count by tag name. + * @param [in] tagName The tag name of interest. 
+ * @return A count of the blackboard artifact tags with the specified tag name. + * @throws TskCoreException + */ + public synchronized long getBlackboardArtifactTagsCountByTagName(TagName tagName) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getBlackboardArtifactTagsCountByTagName(tagName); + } + + /** + * Gets blackboard artifact tags by tag name. + * @param [in] tagName The tag name of interest. + * @return A list, possibly empty, of the blackboard artifact tags with the specified tag name. + * @throws TskCoreException + */ + public synchronized List getBlackboardArtifactTagsByTagName(TagName tagName) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. + if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getBlackboardArtifactTagsByTagName(tagName); + } + + /** + * Gets blackboard artifact tags for a particular blackboard artifact. + * @param [in] artifact The blackboard artifact of interest. + * @return A list, possibly empty, of the tags that have been applied to the artifact. + * @throws TskCoreException + */ + public synchronized List getBlackboardArtifactTagsByArtifact(BlackboardArtifact artifact) throws TskCoreException { + // @@@ This is a work around to be removed when database access on the EDT is correctly synchronized. 
+ if (!tagNamesInitialized) { + getExistingTagNames(); + } + + return tskCase.getBlackboardArtifactTagsByArtifact(artifact); + } + + @Override + public void close() throws IOException { + saveTagNamesToTagsSettings(); + } + + private void getExistingTagNames() { + getTagNamesFromCurrentCase(); + getTagNamesFromTagsSettings(); + getPredefinedTagNames(); + saveTagNamesToTagsSettings(); + tagNamesInitialized = true; // @@@ This is part of a work around to be removed when database access on the EDT is correctly synchronized. + } + + private void getTagNamesFromCurrentCase() { + try { + List currentTagNames = tskCase.getAllTagNames(); + for (TagName tagName : currentTagNames) { + uniqueTagNames.put(tagName.getDisplayName(), tagName); + } + } + catch (TskCoreException ex) { + Logger.getLogger(TagsManager.class.getName()).log(Level.SEVERE, "Failed to get tag types from the current case", ex); + } + } + + private void getTagNamesFromTagsSettings() { + String setting = ModuleSettings.getConfigSetting(TAGS_SETTINGS_NAME, TAG_NAMES_SETTING_KEY); + if (null != setting && !setting.isEmpty()) { + // Read the tag name setting and break it into tag name tuples. + List tagNameTuples = Arrays.asList(setting.split(";")); + + // Parse each tuple and add the tag names to the current case, one + // at a time to gracefully discard any duplicates or corrupt tuples. 
+ for (String tagNameTuple : tagNameTuples) { + String[] tagNameAttributes = tagNameTuple.split(","); + if (!uniqueTagNames.containsKey(tagNameAttributes[0])) { + try { + TagName tagName = tskCase.addTagName(tagNameAttributes[0], tagNameAttributes[1], TagName.HTML_COLOR.getColorByName(tagNameAttributes[2])); + uniqueTagNames.put(tagName.getDisplayName(), tagName); + } + catch (TskCoreException ex) { + Logger.getLogger(TagsManager.class.getName()).log(Level.SEVERE, "Failed to add saved tag name " + tagNameAttributes[0], ex); + } + } + } + } + } + + private void getPredefinedTagNames() { + if (!uniqueTagNames.containsKey("Bookmark")) { + try { + TagName tagName = tskCase.addTagName("Bookmark", "", TagName.HTML_COLOR.NONE); + uniqueTagNames.put(tagName.getDisplayName(), tagName); + } + catch (TskCoreException ex) { + Logger.getLogger(TagsManager.class.getName()).log(Level.SEVERE, "Failed to add predefined 'Bookmark' tag name", ex); + } + } + } + + private void saveTagNamesToTagsSettings() { + if (!uniqueTagNames.isEmpty()) { + StringBuilder setting = new StringBuilder(); + for (TagName tagName : uniqueTagNames.values()) { + if (setting.length() != 0) { + setting.append(";"); + } + setting.append(tagName.getDisplayName()).append(","); + setting.append(tagName.getDescription()).append(","); + setting.append(tagName.getColor().name()); + } + ModuleSettings.setConfigSetting(TAGS_SETTINGS_NAME, TAG_NAMES_SETTING_KEY, setting.toString()); + } + } +} diff --git a/Core/src/org/sleuthkit/autopsy/core/layer.xml b/Core/src/org/sleuthkit/autopsy/core/layer.xml index f43b8285d6..54bec76816 100644 --- a/Core/src/org/sleuthkit/autopsy/core/layer.xml +++ b/Core/src/org/sleuthkit/autopsy/core/layer.xml @@ -310,6 +310,11 @@ + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/HashDatabase/src/org/sleuthkit/autopsy/hashdatabase/IndexStatus.java b/HashDatabase/src/org/sleuthkit/autopsy/hashdatabase/IndexStatus.java deleted 
file mode 100644 index 9867257b68..0000000000 --- a/HashDatabase/src/org/sleuthkit/autopsy/hashdatabase/IndexStatus.java +++ /dev/null @@ -1,67 +0,0 @@ -/* - * Autopsy Forensic Browser - * - * Copyright 2011 Basis Technology Corp. - * Contact: carrier sleuthkit org - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.sleuthkit.autopsy.hashdatabase; - -/** - * The status of a HashDb as determined from its indexExists(), - * databaseExists(), and isOutdated() methods - * @author pmartel - */ -enum IndexStatus { - - /** - * The index exists but the database does not. This indicates a text index - * without an accompanying text database. - */ - INDEX_ONLY("Index only"), - /** - * The database exists but the index does not. This indicates a text database - * with no index. - */ - NO_INDEX("No index"), - /** - * The index is currently being generated. - */ - INDEXING("Index is currently being generated"), - /** - * The index is generated. 
- */ - INDEXED("Indexed"); - - private String message; - - /** - * @param message Short description of the state represented - */ - private IndexStatus(String message) { - this.message = message; - } - - /** - * Get status message - * @return a short description of the state represented - */ - String message() { - return this.message; - } - - public static boolean isIngestible(IndexStatus status) { - return status == INDEX_ONLY || status == INDEXED; - } -} diff --git a/HashDatabase/src/org/sleuthkit/autopsy/hashdatabase/ModalNoButtons.java b/HashDatabase/src/org/sleuthkit/autopsy/hashdatabase/ModalNoButtons.java index 4d7e6f38c4..432a4011f2 100644 --- a/HashDatabase/src/org/sleuthkit/autopsy/hashdatabase/ModalNoButtons.java +++ b/HashDatabase/src/org/sleuthkit/autopsy/hashdatabase/ModalNoButtons.java @@ -1,7 +1,7 @@ /* * Autopsy Forensic Browser * - * Copyright 2011 Basis Technology Corp. + * Copyright 2011 - 2013 Basis Technology Corp. * Contact: carrier sleuthkit org * * Licensed under the Apache License, Version 2.0 (the "License"); @@ -21,12 +21,13 @@ package org.sleuthkit.autopsy.hashdatabase; import java.beans.PropertyChangeEvent; import java.beans.PropertyChangeListener; +import java.io.File; import java.util.ArrayList; import java.util.List; import java.util.logging.Level; import javax.swing.JOptionPane; import org.sleuthkit.autopsy.coreutils.Logger; -import org.sleuthkit.datamodel.TskException; +import org.sleuthkit.datamodel.TskCoreException; /** * This class exists as a stop-gap measure to force users to have an indexed database. 
@@ -40,9 +41,10 @@ import org.sleuthkit.datamodel.TskException; */ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListener { + private static final String INDEX_FILE_EXTENSION = ".kdb"; List unindexed; HashDb toIndex; - HashDbManagementPanel hdbmp; + HashDbConfigPanel hdbmp; int length = 0; int currentcount = 1; String currentDb = ""; @@ -53,7 +55,7 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen * @param parent Swing parent frame. * @param unindexed the list of unindexed databases to index. */ - ModalNoButtons(HashDbManagementPanel hdbmp, java.awt.Frame parent, List unindexed) { + ModalNoButtons(HashDbConfigPanel hdbmp, java.awt.Frame parent, List unindexed) { super(parent, "Indexing databases", true); this.unindexed = unindexed; this.toIndex = null; @@ -68,7 +70,7 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen * @param parent Swing parent frame. * @param unindexed The unindexed database to index. */ - ModalNoButtons(HashDbManagementPanel hdbmp, java.awt.Frame parent, HashDb unindexed){ + ModalNoButtons(HashDbConfigPanel hdbmp, java.awt.Frame parent, HashDb unindexed){ super(parent, "Indexing database", true); this.unindexed = null; this.toIndex = unindexed; @@ -165,7 +167,6 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen * @param evt mouse click event */ private void CANCEL_BUTTONMouseClicked(java.awt.event.MouseEvent evt) {//GEN-FIRST:event_CANCEL_BUTTONMouseClicked - // TODO add your handling code here: String message = "You are about to exit out of indexing your hash databases. \n" + "The generated index will be left unusable. 
If you choose to continue,\n " + "please delete the corresponding -md5.idx file in the hash folder.\n" @@ -173,7 +174,7 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen int res = JOptionPane.showConfirmDialog(this, message, "Unfinished Indexing", JOptionPane.YES_NO_OPTION); if(res == JOptionPane.YES_OPTION){ - List remove = new ArrayList(); + List remove = new ArrayList<>(); if(this.toIndex == null){ remove = this.unindexed; } @@ -203,17 +204,13 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen */ private void indexThis() { this.INDEXING_PROGBAR.setIndeterminate(true); - currentDb = this.toIndex.getDisplayName(); + currentDb = this.toIndex.getHashSetName(); this.CURRENTDB_LABEL.setText("(" + currentDb + ")"); this.length = 1; this.CURRENTLYON_LABEL.setText("Currently indexing 1 database"); if (!this.toIndex.isIndexing()) { this.toIndex.addPropertyChangeListener(this); - try { - this.toIndex.createIndex(); - } catch (TskException se) { - Logger.getLogger(ModalNoButtons.class.getName()).log(Level.WARNING, "Error making TSK index", se); - } + this.toIndex.createIndex(okToDeleteOldIndexFile(toIndex)); } } @@ -224,16 +221,12 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen length = this.unindexed.size(); this.INDEXING_PROGBAR.setIndeterminate(true); for (HashDb db : this.unindexed) { - currentDb = db.getDisplayName(); + currentDb = db.getHashSetName(); this.CURRENTDB_LABEL.setText("(" + currentDb + ")"); this.CURRENTLYON_LABEL.setText("Currently indexing 1 of " + length); if (!db.isIndexing()) { db.addPropertyChangeListener(this); - try { - db.createIndex(); - } catch (TskException e) { - Logger.getLogger(ModalNoButtons.class.getName()).log(Level.WARNING, "Error making TSK index", e); - } + db.createIndex(okToDeleteOldIndexFile(db)); } } } @@ -250,7 +243,7 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen * Displays the current count 
of indexing when one is completed, or kills this dialog if all indexing is complete. */ public void propertyChange(PropertyChangeEvent evt) { - if (evt.getPropertyName().equals(HashDb.EVENT.INDEXING_DONE.name())) { + if (evt.getPropertyName().equals(HashDb.Event.INDEXING_DONE.name())) { if (currentcount >= length) { this.INDEXING_PROGBAR.setValue(100); this.setModal(false); @@ -263,4 +256,22 @@ class ModalNoButtons extends javax.swing.JDialog implements PropertyChangeListen } } } + + private boolean okToDeleteOldIndexFile(HashDb hashDb) { + boolean deleteOldIndexFile = true; + try { + if (hashDb.hasLookupIndex()) { + String indexPath = hashDb.getIndexPath(); + File indexFile = new File(indexPath); + if (!indexPath.endsWith(INDEX_FILE_EXTENSION)) { + deleteOldIndexFile = JOptionPane.showConfirmDialog(this, "Updating index file format, delete " + indexFile.getName() + " file that uses the old file format?", "Delete Obsolete Index File", JOptionPane.YES_NO_OPTION) == JOptionPane.YES_OPTION; + } + } + } + catch (TskCoreException ex) { + Logger.getLogger(HashDbConfigPanel.class.getName()).log(Level.SEVERE, "Error getting index info for hash database", ex); + JOptionPane.showMessageDialog(null, "Error getting index information for " + hashDb.getHashSetName() + " hash database. 
Cannot perform indexing operation.", "Hash Database Index Status Error", JOptionPane.ERROR_MESSAGE); + } + return deleteOldIndexFile; + } } diff --git a/KeywordSearch/manifest.mf b/KeywordSearch/manifest.mf index dd9e48a200..e309652025 100644 --- a/KeywordSearch/manifest.mf +++ b/KeywordSearch/manifest.mf @@ -1,9 +1,9 @@ -Manifest-Version: 1.0 -AutoUpdate-Show-In-Client: true -OpenIDE-Module: org.sleuthkit.autopsy.keywordsearch/5 -OpenIDE-Module-Implementation-Version: 9 -OpenIDE-Module-Install: org/sleuthkit/autopsy/keywordsearch/Installer.class -OpenIDE-Module-Layer: org/sleuthkit/autopsy/keywordsearch/layer.xml -OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/keywordsearch/Bundle.properties -OpenIDE-Module-Requires: org.openide.windows.WindowManager - +Manifest-Version: 1.0 +AutoUpdate-Show-In-Client: true +OpenIDE-Module: org.sleuthkit.autopsy.keywordsearch/5 +OpenIDE-Module-Implementation-Version: 9 +OpenIDE-Module-Install: org/sleuthkit/autopsy/keywordsearch/Installer.class +OpenIDE-Module-Layer: org/sleuthkit/autopsy/keywordsearch/layer.xml +OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/keywordsearch/Bundle.properties +OpenIDE-Module-Requires: org.openide.windows.WindowManager + diff --git a/KeywordSearch/nbproject/project.properties b/KeywordSearch/nbproject/project.properties index 140caac79c..4f3228693f 100644 --- a/KeywordSearch/nbproject/project.properties +++ b/KeywordSearch/nbproject/project.properties @@ -1,6 +1,6 @@ -javac.source=1.7 -javac.compilerargs=-Xlint -Xlint:-serial -license.file=../LICENSE-2.0.txt -nbm.homepage=http://www.sleuthkit.org/autopsy/ -nbm.needs.restart=true -spec.version.base=3.2 +javac.source=1.7 +javac.compilerargs=-Xlint -Xlint:-serial +license.file=../LICENSE-2.0.txt +nbm.homepage=http://www.sleuthkit.org/autopsy/ +nbm.needs.restart=true +spec.version.base=3.2 diff --git a/KeywordSearch/nbproject/project.xml b/KeywordSearch/nbproject/project.xml index 215c6068ba..455c7745da 100644 --- 
a/KeywordSearch/nbproject/project.xml +++ b/KeywordSearch/nbproject/project.xml @@ -1,415 +1,415 @@ - - - org.netbeans.modules.apisupport.project - - - org.sleuthkit.autopsy.keywordsearch - - - - org.netbeans.api.progress - - - - 1 - 1.24.1 - - - - org.netbeans.modules.javahelp - - - - 1 - 2.22.1 - - - - org.netbeans.modules.options.api - - - - 1 - 1.26.1 - - - - org.netbeans.modules.settings - - - - 1 - 1.31.1 - - - - org.openide.awt - - - - 7.31.1 - - - - org.openide.modules - - - - 7.23.1 - - - - org.openide.nodes - - - - 7.21.1 - - - - org.openide.util - - - - 8.15.1 - - - - org.openide.util.lookup - - - - 8.8.1 - - - - org.openide.windows - - - - 6.40.1 - - - - org.sleuthkit.autopsy.core - - - - 9 - 7.0 - - - - - org.apache.commons.lang - org.apache.commons.lang.builder - org.apache.commons.lang.enums - org.apache.commons.lang.exception - org.apache.commons.lang.math - org.apache.commons.lang.mutable - org.apache.commons.lang.text - org.apache.commons.lang.time - org.apache.commons.logging - org.apache.commons.logging.impl - org.apache.tika - org.apache.tika.config - org.apache.tika.detect - org.apache.tika.exception - org.apache.tika.extractor - org.apache.tika.fork - org.apache.tika.io - org.apache.tika.language - org.apache.tika.metadata - org.apache.tika.mime - org.apache.tika.parser - org.apache.tika.parser.asm - org.apache.tika.parser.audio - org.apache.tika.parser.chm - org.apache.tika.parser.chm.accessor - org.apache.tika.parser.chm.assertion - org.apache.tika.parser.chm.core - org.apache.tika.parser.chm.exception - org.apache.tika.parser.chm.lzx - org.apache.tika.parser.crypto - org.apache.tika.parser.dwg - org.apache.tika.parser.epub - org.apache.tika.parser.executable - org.apache.tika.parser.external - org.apache.tika.parser.feed - org.apache.tika.parser.font - org.apache.tika.parser.hdf - org.apache.tika.parser.html - org.apache.tika.parser.image - org.apache.tika.parser.image.xmp - org.apache.tika.parser.internal - org.apache.tika.parser.iptc - 
org.apache.tika.parser.iwork - org.apache.tika.parser.jpeg - org.apache.tika.parser.mail - org.apache.tika.parser.mbox - org.apache.tika.parser.microsoft - org.apache.tika.parser.microsoft.ooxml - org.apache.tika.parser.mp3 - org.apache.tika.parser.mp4 - org.apache.tika.parser.netcdf - org.apache.tika.parser.odf - org.apache.tika.parser.opendocument - org.apache.tika.parser.pdf - org.apache.tika.parser.pkg - org.apache.tika.parser.prt - org.apache.tika.parser.rtf - org.apache.tika.parser.txt - org.apache.tika.parser.video - org.apache.tika.parser.xml - org.apache.tika.sax - org.apache.tika.sax.xpath - org.apache.tika.utils - org.sleuthkit.autopsy.keywordsearch - - - ext/metadata-extractor-2.4.0-beta-1.jar - release/modules/ext/metadata-extractor-2.4.0-beta-1.jar - - - ext/commons-io-2.1.jar - release/modules/ext/commons-io-2.1.jar - - - ext/commons-lang-2.4.jar - release/modules/ext/commons-lang-2.4.jar - - - ext/log4j-1.2.17.jar - release/modules/ext/log4j-1.2.17.jar - - - ext/jcl-over-slf4j-1.6.4.jar - release/modules/ext/jcl-over-slf4j-1.6.4.jar - - - ext/asm-all-3.1.jar - release/modules/ext/asm-all-3.1.jar - - - ext/qdox-1.12.jar - release/modules/ext/qdox-1.12.jar - - - ext/org.apache.felix.scr.generator-1.1.2.jar - release/modules/ext/org.apache.felix.scr.generator-1.1.2.jar - - - ext/bcmail-jdk15-1.45.jar - release/modules/ext/bcmail-jdk15-1.45.jar - - - ext/vorbis-java-core-0.1-tests.jar - release/modules/ext/vorbis-java-core-0.1-tests.jar - - - ext/tika-parsers-1.2-javadoc.jar - release/modules/ext/tika-parsers-1.2-javadoc.jar - - - ext/log4j-over-slf4j-1.6.4.jar - release/modules/ext/log4j-over-slf4j-1.6.4.jar - - - ext/vorbis-java-tika-0.1.jar - release/modules/ext/vorbis-java-tika-0.1.jar - - - ext/isoparser-1.0-RC-1.jar - release/modules/ext/isoparser-1.0-RC-1.jar - - - ext/httpcore-4.1.4.jar - release/modules/ext/httpcore-4.1.4.jar - - - ext/tika-parsers-1.2-sources.jar - release/modules/ext/tika-parsers-1.2-sources.jar - - - ext/aspectjrt-1.6.11.jar 
- release/modules/ext/aspectjrt-1.6.11.jar - - - ext/commons-compress-1.4.1.jar - release/modules/ext/commons-compress-1.4.1.jar - - - ext/poi-3.8.jar - release/modules/ext/poi-3.8.jar - - - ext/tika-parsers-1.2.jar - release/modules/ext/tika-parsers-1.2.jar - - - ext/apache-mime4j-core-0.7.2.jar - release/modules/ext/apache-mime4j-core-0.7.2.jar - - - ext/rome-0.9.jar - release/modules/ext/rome-0.9.jar - - - ext/httpclient-4.1.3.jar - release/modules/ext/httpclient-4.1.3.jar - - - ext/icu4j-3.8.jar - release/modules/ext/icu4j-3.8.jar - - - ext/juniversalchardet-1.0.3.jar - release/modules/ext/juniversalchardet-1.0.3.jar - - - ext/pdfbox-1.7.0.jar - release/modules/ext/pdfbox-1.7.0.jar - - - ext/jericho-html-3.3-sources.jar - release/modules/ext/jericho-html-3.3-sources.jar - - - ext/jdom-1.0.jar - release/modules/ext/jdom-1.0.jar - - - ext/commons-logging-1.1.1.jar - release/modules/ext/commons-logging-1.1.1.jar - - - ext/tagsoup-1.2.1.jar - release/modules/ext/tagsoup-1.2.1.jar - - - ext/fontbox-1.7.0.jar - release/modules/ext/fontbox-1.7.0.jar - - - ext/poi-ooxml-3.8.jar - release/modules/ext/poi-ooxml-3.8.jar - - - ext/boilerpipe-1.1.0.jar - release/modules/ext/boilerpipe-1.1.0.jar - - - ext/org.osgi.compendium-4.0.0.jar - release/modules/ext/org.osgi.compendium-4.0.0.jar - - - ext/slf4j-api-1.7.2.jar - release/modules/ext/slf4j-api-1.7.2.jar - - - ext/commons-lang-2.4-javadoc.jar - release/modules/ext/commons-lang-2.4-javadoc.jar - - - ext/jempbox-1.7.0.jar - release/modules/ext/jempbox-1.7.0.jar - - - ext/jericho-html-3.3-javadoc.jar - release/modules/ext/jericho-html-3.3-javadoc.jar - - - ext/wstx-asl-3.2.7.jar - release/modules/ext/wstx-asl-3.2.7.jar - - - ext/netcdf-4.2-min.jar - release/modules/ext/netcdf-4.2-min.jar - - - ext/solr-solrj-4.0.0-javadoc.jar - release/modules/ext/solr-solrj-4.0.0-javadoc.jar - - - ext/xmlbeans-2.3.0.jar - release/modules/ext/xmlbeans-2.3.0.jar - - - ext/httpmime-4.1.3.jar - release/modules/ext/httpmime-4.1.3.jar - - - 
ext/org.osgi.core-4.0.0.jar - release/modules/ext/org.osgi.core-4.0.0.jar - - - ext/org.apache.felix.scr.annotations-1.6.0.jar - release/modules/ext/org.apache.felix.scr.annotations-1.6.0.jar - - - ext/commons-logging-api-1.1.jar - release/modules/ext/commons-logging-api-1.1.jar - - - ext/xz-1.0.jar - release/modules/ext/xz-1.0.jar - - - ext/commons-codec-1.7.jar - release/modules/ext/commons-codec-1.7.jar - - - ext/tika-core-1.2.jar - release/modules/ext/tika-core-1.2.jar - - - ext/zookeeper-3.3.6.jar - release/modules/ext/zookeeper-3.3.6.jar - - - ext/dom4j-1.6.1.jar - release/modules/ext/dom4j-1.6.1.jar - - - ext/poi-scratchpad-3.8.jar - release/modules/ext/poi-scratchpad-3.8.jar - - - ext/poi-ooxml-schemas-3.8.jar - release/modules/ext/poi-ooxml-schemas-3.8.jar - - - ext/bcprov-jdk15-1.45.jar - release/modules/ext/bcprov-jdk15-1.45.jar - - - ext/jericho-html-3.3.jar - release/modules/ext/jericho-html-3.3.jar - - - ext/solr-solrj-4.0.0.jar - release/modules/ext/solr-solrj-4.0.0.jar - - - ext/commons-lang-2.4-sources.jar - release/modules/ext/commons-lang-2.4-sources.jar - - - ext/solr-solrj-4.0.0-sources.jar - release/modules/ext/solr-solrj-4.0.0-sources.jar - - - ext/apache-mime4j-dom-0.7.2.jar - release/modules/ext/apache-mime4j-dom-0.7.2.jar - - - ext/geronimo-stax-api_1.0_spec-1.0.1.jar - release/modules/ext/geronimo-stax-api_1.0_spec-1.0.1.jar - - - ext/asm-3.1.jar - release/modules/ext/asm-3.1.jar - - - - + + + org.netbeans.modules.apisupport.project + + + org.sleuthkit.autopsy.keywordsearch + + + + org.netbeans.api.progress + + + + 1 + 1.24.1 + + + + org.netbeans.modules.javahelp + + + + 1 + 2.22.1 + + + + org.netbeans.modules.options.api + + + + 1 + 1.26.1 + + + + org.netbeans.modules.settings + + + + 1 + 1.31.1 + + + + org.openide.awt + + + + 7.31.1 + + + + org.openide.modules + + + + 7.23.1 + + + + org.openide.nodes + + + + 7.21.1 + + + + org.openide.util + + + + 8.15.1 + + + + org.openide.util.lookup + + + + 8.8.1 + + + + org.openide.windows + + + + 
6.40.1 + + + + org.sleuthkit.autopsy.core + + + + 9 + 7.0 + + + + + org.apache.commons.lang + org.apache.commons.lang.builder + org.apache.commons.lang.enums + org.apache.commons.lang.exception + org.apache.commons.lang.math + org.apache.commons.lang.mutable + org.apache.commons.lang.text + org.apache.commons.lang.time + org.apache.commons.logging + org.apache.commons.logging.impl + org.apache.tika + org.apache.tika.config + org.apache.tika.detect + org.apache.tika.exception + org.apache.tika.extractor + org.apache.tika.fork + org.apache.tika.io + org.apache.tika.language + org.apache.tika.metadata + org.apache.tika.mime + org.apache.tika.parser + org.apache.tika.parser.asm + org.apache.tika.parser.audio + org.apache.tika.parser.chm + org.apache.tika.parser.chm.accessor + org.apache.tika.parser.chm.assertion + org.apache.tika.parser.chm.core + org.apache.tika.parser.chm.exception + org.apache.tika.parser.chm.lzx + org.apache.tika.parser.crypto + org.apache.tika.parser.dwg + org.apache.tika.parser.epub + org.apache.tika.parser.executable + org.apache.tika.parser.external + org.apache.tika.parser.feed + org.apache.tika.parser.font + org.apache.tika.parser.hdf + org.apache.tika.parser.html + org.apache.tika.parser.image + org.apache.tika.parser.image.xmp + org.apache.tika.parser.internal + org.apache.tika.parser.iptc + org.apache.tika.parser.iwork + org.apache.tika.parser.jpeg + org.apache.tika.parser.mail + org.apache.tika.parser.mbox + org.apache.tika.parser.microsoft + org.apache.tika.parser.microsoft.ooxml + org.apache.tika.parser.mp3 + org.apache.tika.parser.mp4 + org.apache.tika.parser.netcdf + org.apache.tika.parser.odf + org.apache.tika.parser.opendocument + org.apache.tika.parser.pdf + org.apache.tika.parser.pkg + org.apache.tika.parser.prt + org.apache.tika.parser.rtf + org.apache.tika.parser.txt + org.apache.tika.parser.video + org.apache.tika.parser.xml + org.apache.tika.sax + org.apache.tika.sax.xpath + org.apache.tika.utils + 
org.sleuthkit.autopsy.keywordsearch + + + ext/metadata-extractor-2.4.0-beta-1.jar + release/modules/ext/metadata-extractor-2.4.0-beta-1.jar + + + ext/commons-io-2.1.jar + release/modules/ext/commons-io-2.1.jar + + + ext/commons-lang-2.4.jar + release/modules/ext/commons-lang-2.4.jar + + + ext/log4j-1.2.17.jar + release/modules/ext/log4j-1.2.17.jar + + + ext/jcl-over-slf4j-1.6.4.jar + release/modules/ext/jcl-over-slf4j-1.6.4.jar + + + ext/asm-all-3.1.jar + release/modules/ext/asm-all-3.1.jar + + + ext/qdox-1.12.jar + release/modules/ext/qdox-1.12.jar + + + ext/org.apache.felix.scr.generator-1.1.2.jar + release/modules/ext/org.apache.felix.scr.generator-1.1.2.jar + + + ext/bcmail-jdk15-1.45.jar + release/modules/ext/bcmail-jdk15-1.45.jar + + + ext/vorbis-java-core-0.1-tests.jar + release/modules/ext/vorbis-java-core-0.1-tests.jar + + + ext/tika-parsers-1.2-javadoc.jar + release/modules/ext/tika-parsers-1.2-javadoc.jar + + + ext/log4j-over-slf4j-1.6.4.jar + release/modules/ext/log4j-over-slf4j-1.6.4.jar + + + ext/vorbis-java-tika-0.1.jar + release/modules/ext/vorbis-java-tika-0.1.jar + + + ext/isoparser-1.0-RC-1.jar + release/modules/ext/isoparser-1.0-RC-1.jar + + + ext/httpcore-4.1.4.jar + release/modules/ext/httpcore-4.1.4.jar + + + ext/tika-parsers-1.2-sources.jar + release/modules/ext/tika-parsers-1.2-sources.jar + + + ext/aspectjrt-1.6.11.jar + release/modules/ext/aspectjrt-1.6.11.jar + + + ext/commons-compress-1.4.1.jar + release/modules/ext/commons-compress-1.4.1.jar + + + ext/poi-3.8.jar + release/modules/ext/poi-3.8.jar + + + ext/tika-parsers-1.2.jar + release/modules/ext/tika-parsers-1.2.jar + + + ext/apache-mime4j-core-0.7.2.jar + release/modules/ext/apache-mime4j-core-0.7.2.jar + + + ext/rome-0.9.jar + release/modules/ext/rome-0.9.jar + + + ext/httpclient-4.1.3.jar + release/modules/ext/httpclient-4.1.3.jar + + + ext/icu4j-3.8.jar + release/modules/ext/icu4j-3.8.jar + + + ext/juniversalchardet-1.0.3.jar + release/modules/ext/juniversalchardet-1.0.3.jar + + 
+ ext/pdfbox-1.7.0.jar + release/modules/ext/pdfbox-1.7.0.jar + + + ext/jericho-html-3.3-sources.jar + release/modules/ext/jericho-html-3.3-sources.jar + + + ext/jdom-1.0.jar + release/modules/ext/jdom-1.0.jar + + + ext/commons-logging-1.1.1.jar + release/modules/ext/commons-logging-1.1.1.jar + + + ext/tagsoup-1.2.1.jar + release/modules/ext/tagsoup-1.2.1.jar + + + ext/fontbox-1.7.0.jar + release/modules/ext/fontbox-1.7.0.jar + + + ext/poi-ooxml-3.8.jar + release/modules/ext/poi-ooxml-3.8.jar + + + ext/boilerpipe-1.1.0.jar + release/modules/ext/boilerpipe-1.1.0.jar + + + ext/org.osgi.compendium-4.0.0.jar + release/modules/ext/org.osgi.compendium-4.0.0.jar + + + ext/slf4j-api-1.7.2.jar + release/modules/ext/slf4j-api-1.7.2.jar + + + ext/commons-lang-2.4-javadoc.jar + release/modules/ext/commons-lang-2.4-javadoc.jar + + + ext/jempbox-1.7.0.jar + release/modules/ext/jempbox-1.7.0.jar + + + ext/jericho-html-3.3-javadoc.jar + release/modules/ext/jericho-html-3.3-javadoc.jar + + + ext/wstx-asl-3.2.7.jar + release/modules/ext/wstx-asl-3.2.7.jar + + + ext/netcdf-4.2-min.jar + release/modules/ext/netcdf-4.2-min.jar + + + ext/solr-solrj-4.0.0-javadoc.jar + release/modules/ext/solr-solrj-4.0.0-javadoc.jar + + + ext/xmlbeans-2.3.0.jar + release/modules/ext/xmlbeans-2.3.0.jar + + + ext/httpmime-4.1.3.jar + release/modules/ext/httpmime-4.1.3.jar + + + ext/org.osgi.core-4.0.0.jar + release/modules/ext/org.osgi.core-4.0.0.jar + + + ext/org.apache.felix.scr.annotations-1.6.0.jar + release/modules/ext/org.apache.felix.scr.annotations-1.6.0.jar + + + ext/commons-logging-api-1.1.jar + release/modules/ext/commons-logging-api-1.1.jar + + + ext/xz-1.0.jar + release/modules/ext/xz-1.0.jar + + + ext/commons-codec-1.7.jar + release/modules/ext/commons-codec-1.7.jar + + + ext/tika-core-1.2.jar + release/modules/ext/tika-core-1.2.jar + + + ext/zookeeper-3.3.6.jar + release/modules/ext/zookeeper-3.3.6.jar + + + ext/dom4j-1.6.1.jar + release/modules/ext/dom4j-1.6.1.jar + + + 
ext/poi-scratchpad-3.8.jar + release/modules/ext/poi-scratchpad-3.8.jar + + + ext/poi-ooxml-schemas-3.8.jar + release/modules/ext/poi-ooxml-schemas-3.8.jar + + + ext/bcprov-jdk15-1.45.jar + release/modules/ext/bcprov-jdk15-1.45.jar + + + ext/jericho-html-3.3.jar + release/modules/ext/jericho-html-3.3.jar + + + ext/solr-solrj-4.0.0.jar + release/modules/ext/solr-solrj-4.0.0.jar + + + ext/commons-lang-2.4-sources.jar + release/modules/ext/commons-lang-2.4-sources.jar + + + ext/solr-solrj-4.0.0-sources.jar + release/modules/ext/solr-solrj-4.0.0-sources.jar + + + ext/apache-mime4j-dom-0.7.2.jar + release/modules/ext/apache-mime4j-dom-0.7.2.jar + + + ext/geronimo-stax-api_1.0_spec-1.0.1.jar + release/modules/ext/geronimo-stax-api_1.0_spec-1.0.1.jar + + + ext/asm-3.1.jar + release/modules/ext/asm-3.1.jar + + + + diff --git a/KeywordSearch/release/solr/solr/conf/schema.xml b/KeywordSearch/release/solr/solr/conf/schema.xml index 203820992f..6a28005313 100644 --- a/KeywordSearch/release/solr/solr/conf/schema.xml +++ b/KeywordSearch/release/solr/solr/conf/schema.xml @@ -510,12 +510,13 @@ - + + @@ -555,7 +556,7 @@ - + + + + - - - - Keyword Search - - - - -

Keyword Search

-

- Autopsy ships a keyword search module, which provides the ingest capability - and also supports a manual text search mode. -

-

The keyword search ingest module extracts text from the files on the image being ingested and adds them to the index that can then be searched.

-

- Autopsy tries its best to extract maximum amount of text from the files being indexed. - First, the indexing will try to extract text from supported file formats, such as pure text file format, MS Office Documents, PDF files, Email files, and many others. - If the file is not supported by the standard text extractor, Autopsy will fallback to string extraction algorithm. - String extraction on unknown file formats or arbitrary binary files can often still extract a good amount of text from the file, often good enough to provide additional clues. - However, string extraction will not be able to extract text strings from binary files that have been encrypted. -

-

- Autopsy ships with some built-in lists that define regular expressions and enable user to search for Phone Numbers, IP addresses, URLs and E-mail addresses. - However, enabling some of these very general lists can produce a very large number of hits, many of them can be false-positives. -

-

- Once files are in the index, they can be searched quickly for specific keywords, regular expressions, - or using keyword search lists that can contain a mixture of keywords and regular expressions. - Search queries can be executed automatically by the ingest during the ingest run, or at the end of the ingest, depending on the current settings and the time it takes to ingest the image. -

-

Search queries can also be executed manually by the user at any time, as long as there are some files already indexed and ready to be searched.

-

- Keyword search module will save the search results regardless whether the search is performed by the ingest process, or manually by the user. - The saved results are available in the Directory Tree in the left hand side panel. -

-

- To see keyword search results in real-time while ingest is running, add keyword lists using the - Keyword Search Configuration Dialog - and select the "Use during ingest" check box. - You can select "Enable sending messages to inbox during ingest" per list, if the hits on that list should be reported in the Inbox, which is recommended for very specific searches. -

-

- See (Ingest) - for more information on ingest in general. -

-

- Once there are files in the index, the Keyword Search Bar - will be available for use to manually search at any time. -

- - - + + + + + Keyword Search + + + + +

Keyword Search

+

+ Autopsy ships with a keyword search module, which provides the ingest capability + and also supports a manual text search mode. +

+

The keyword search ingest module extracts text from the files on the image being ingested and adds it to an index that can then be searched.

+

+ Autopsy tries its best to extract the maximum amount of text from the files being indexed. + First, the indexing will try to extract text from supported file formats, such as plain text, MS Office documents, PDF files, e-mail files, and many others. + If a file is not supported by the standard text extractor, Autopsy will fall back to a string extraction algorithm. + String extraction on unknown file formats or arbitrary binary files can often still recover a good amount of text, often enough to provide additional clues. + However, string extraction cannot recover text strings from binary files that have been encrypted. +
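The extract-then-fallback strategy described above can be sketched roughly as follows. This is an illustrative example, not Autopsy's actual extractor code; the class and method names (`TextExtractSketch`, `extractStrings`) are hypothetical, and real string extraction also handles UTF-16 and other encodings.

```java
// Sketch of the string-extraction fallback used when no format-specific
// text extractor supports a file: pull out printable ASCII runs of a
// minimum length, the way classic "strings" tools do.
// Names here are illustrative, not from the Autopsy codebase.
public class TextExtractSketch {

    /** Collect printable-ASCII runs of at least minLen characters,
     *  one per line, from raw file bytes. */
    public static String extractStrings(byte[] data, int minLen) {
        StringBuilder out = new StringBuilder();
        StringBuilder run = new StringBuilder();
        for (byte b : data) {
            char c = (char) (b & 0xFF);
            if (c >= 0x20 && c <= 0x7E) {      // printable ASCII
                run.append(c);
            } else {
                if (run.length() >= minLen) {
                    out.append(run).append('\n');
                }
                run.setLength(0);              // run broken by a non-printable byte
            }
        }
        if (run.length() >= minLen) {
            out.append(run).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        byte[] blob = {0x00, 'h', 'e', 'l', 'l', 'o', 0x01, 0x02, 'h', 'i', 0x00};
        // "hello" survives (length >= 4); "hi" is too short and is dropped.
        System.out.print(extractStrings(blob, 4));
    }
}
```

This is why string extraction still yields clues from unknown binary formats, but produces nothing useful from encrypted data, where no long printable runs survive.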

+

+ Autopsy ships with some built-in lists that define regular expressions and enable the user to search for phone numbers, IP addresses, URLs, and e-mail addresses. + However, enabling some of these very general lists can produce a very large number of hits, many of which can be false positives. +
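Regular-expression lists of this kind can be illustrated with a small sketch. The patterns below are deliberately simplified stand-ins, not the expressions Autopsy actually ships; they exist only to show why very general patterns produce many hits.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified examples of the kinds of patterns a built-in keyword list
// might define. Autopsy's shipped lists are more elaborate than these.
public class BuiltInListDemo {
    static final Pattern EMAIL = Pattern.compile("[\\w.+-]+@[\\w-]+(\\.[\\w-]+)+");
    static final Pattern IPV4  = Pattern.compile("\\b(?:\\d{1,3}\\.){3}\\d{1,3}\\b");
    static final Pattern PHONE = Pattern.compile("\\b\\d{3}[-.]\\d{3}[-.]\\d{4}\\b");

    /** Count how many times a list pattern matches in extracted text. */
    public static int countHits(Pattern p, String text) {
        Matcher m = p.matcher(text);
        int n = 0;
        while (m.find()) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        String text = "Contact alice@example.com or 555-123-4567; host 10.0.0.1.";
        System.out.println(countHits(EMAIL, text));
        System.out.println(countHits(IPV4, text));
        System.out.println(countHits(PHONE, text));
    }
}
```

Note that a loose pattern like the IPv4 one will also match strings such as version numbers (`3.0.7.1`), which is exactly the false-positive behavior the paragraph above warns about.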

+

+ Once files are in the index, they can be searched quickly for specific keywords or regular expressions, + or with keyword search lists that can contain a mixture of keywords and regular expressions. + Search queries can be executed automatically by the ingest module during the ingest run, or at the end of ingest, depending on the current settings and the time it takes to ingest the image. +

+

Search queries can also be executed manually by the user at any time, as long as there are some files already indexed and ready to be searched.

+

+ The keyword search module saves the search results regardless of whether the search is performed by the ingest process or manually by the user. + The saved results are available in the Directory Tree in the left-hand panel. +

+

+ To see keyword search results in real time while ingest is running, add keyword lists using the + Keyword Search Configuration Dialog + and select the "Use during ingest" check box. + You can select "Send messages to inbox during ingest" per list if the hits on that list should be reported in the Inbox; this is recommended for very specific searches. +

+

+ See (Ingest) + for more information on ingest in general. +

+

+ Once there are files in the index, the Keyword Search Bar + will be available for manual searches at any time. +

+ + + diff --git a/NEWS.txt b/NEWS.txt index 0fd41075bf..848a5709da 100644 --- a/NEWS.txt +++ b/NEWS.txt @@ -1,3 +1,16 @@ + +---------------- VERSION 3.0.9 -------------- +Bug Fixes: +- Regular expression keyword search works on file names. + +Improvements: +- Enhanced reporting on keyword search module errors + + +---------------- VERSION 3.0.8 -------------- +Bug Fixes: +- Fixed installer bug on Windows. No other code changes. + ---------------- VERSION 3.0.7 -------------- New features: diff --git a/RecentActivity/manifest.mf b/RecentActivity/manifest.mf index 14b58804be..b6f0a41ec0 100644 --- a/RecentActivity/manifest.mf +++ b/RecentActivity/manifest.mf @@ -1,10 +1,10 @@ -Manifest-Version: 1.0 -OpenIDE-Module: org.sleuthkit.autopsy.recentactivity/5 -OpenIDE-Module-Implementation-Version: 9 -OpenIDE-Module-Layer: org/sleuthkit/autopsy/recentactivity/layer.xml -OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/recentactivity/Bundle.properties -OpenIDE-Module-Requires: - org.openide.modules.InstalledFileLocator, - org.openide.windows.TopComponent$Registry, - org.openide.windows.WindowManager - +Manifest-Version: 1.0 +OpenIDE-Module: org.sleuthkit.autopsy.recentactivity/5 +OpenIDE-Module-Implementation-Version: 9 +OpenIDE-Module-Layer: org/sleuthkit/autopsy/recentactivity/layer.xml +OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/recentactivity/Bundle.properties +OpenIDE-Module-Requires: + org.openide.modules.InstalledFileLocator, + org.openide.windows.TopComponent$Registry, + org.openide.windows.WindowManager + diff --git a/RecentActivity/nbproject/project.properties b/RecentActivity/nbproject/project.properties index 4ce77193f7..2cb871f415 100644 --- a/RecentActivity/nbproject/project.properties +++ b/RecentActivity/nbproject/project.properties @@ -1,7 +1,7 @@ -file.reference.gson-2.1.jar=release/modules/ext/gson-2.1.jar -javac.source=1.7 -javac.compilerargs=-Xlint -Xlint:-serial -license.file=../LICENSE-2.0.txt 
-nbm.homepage=http://www.sleuthkit.org/autopsy/ -nbm.needs.restart=true -spec.version.base=3.0 +file.reference.gson-2.1.jar=release/modules/ext/gson-2.1.jar +javac.source=1.7 +javac.compilerargs=-Xlint -Xlint:-serial +license.file=../LICENSE-2.0.txt +nbm.homepage=http://www.sleuthkit.org/autopsy/ +nbm.needs.restart=true +spec.version.base=3.0 diff --git a/RecentActivity/nbproject/project.xml b/RecentActivity/nbproject/project.xml index 8da413bbc9..78526a084c 100644 --- a/RecentActivity/nbproject/project.xml +++ b/RecentActivity/nbproject/project.xml @@ -1,52 +1,52 @@ - - - org.netbeans.modules.apisupport.project - - - org.sleuthkit.autopsy.recentactivity - - - - org.openide.awt - - - - 7.46.1 - - - - org.openide.modules - - - - 7.23.1 - - - - org.openide.nodes - - - - 7.21.1 - - - - org.sleuthkit.autopsy.core - - - - 9 - 7.0 - - - - - org.sleuthkit.autopsy.recentactivity - - - ext/gson-2.1.jar - release/modules/ext/gson-2.1.jar - - - - + + + org.netbeans.modules.apisupport.project + + + org.sleuthkit.autopsy.recentactivity + + + + org.openide.awt + + + + 7.46.1 + + + + org.openide.modules + + + + 7.23.1 + + + + org.openide.nodes + + + + 7.21.1 + + + + org.sleuthkit.autopsy.core + + + + 9 + 7.0 + + + + + org.sleuthkit.autopsy.recentactivity + + + ext/gson-2.1.jar + release/modules/ext/gson-2.1.jar + + + + diff --git a/RecentActivity/release/rr/plugins/arunmru.pl b/RecentActivity/release/rr/plugins/arunmru.pl index 8edea6e515..504700f145 100644 --- a/RecentActivity/release/rr/plugins/arunmru.pl +++ b/RecentActivity/release/rr/plugins/arunmru.pl @@ -47,7 +47,7 @@ sub pluginmain { my @vals = $key->get_list_of_values(); ::rptMsg(""); - ::rptMsg(""); + ::rptMsg("".gmtime($key->get_timestamp()).""); ::rptMsg(""); my %runvals; my $mru; @@ -75,4 +75,4 @@ sub pluginmain { } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/autopsylogin.pl b/RecentActivity/release/rr/plugins/autopsylogin.pl index 5f83827176..ab0365817e 100644 --- 
a/RecentActivity/release/rr/plugins/autopsylogin.pl +++ b/RecentActivity/release/rr/plugins/autopsylogin.pl @@ -48,7 +48,7 @@ sub pluginmain { #::rptMsg("Logon User Name"); #::rptMsg($key_path); ::rptMsg(""); - ::rptMsg(""); + ::rptMsg("".gmtime($key->get_timestamp()).""); foreach my $v (@vals) { if ($v->get_name() eq $logon_name) { ::rptMsg(" ".$v->get_data() .""); @@ -67,4 +67,4 @@ sub pluginmain { } } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/autopsy b/RecentActivity/release/rr/plugins/autopsyntuser similarity index 100% rename from RecentActivity/release/rr/plugins/autopsy rename to RecentActivity/release/rr/plugins/autopsyntuser diff --git a/RecentActivity/release/rr/plugins/autopsyrecentdocs.pl b/RecentActivity/release/rr/plugins/autopsyrecentdocs.pl index 538555ef8d..776126175b 100644 --- a/RecentActivity/release/rr/plugins/autopsyrecentdocs.pl +++ b/RecentActivity/release/rr/plugins/autopsyrecentdocs.pl @@ -49,7 +49,7 @@ sub pluginmain { #::rptMsg("RecentDocs"); #::rptMsg("**All values printed in MRUList\\MRUListEx order."); #::rptMsg($key_path); - ::rptMsg(""); + ::rptMsg("".gmtime($key->get_timestamp()).""); # Get RecentDocs values my %rdvals = getRDValues($key); if (%rdvals) { @@ -158,4 +158,4 @@ sub getRDValues { } } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/autopsyshellfolders.pl b/RecentActivity/release/rr/plugins/autopsyshellfolders.pl index de3115f9dd..d625820ec5 100644 --- a/RecentActivity/release/rr/plugins/autopsyshellfolders.pl +++ b/RecentActivity/release/rr/plugins/autopsyshellfolders.pl @@ -48,7 +48,7 @@ sub pluginmain { my $key; if ($key = $root_key->get_subkey($key_path)) { ::rptMsg(""); - ::rptMsg(""); + ::rptMsg("".gmtime($key->get_timestamp()).""); my @vals = $key->get_list_of_values(); ::rptMsg(""); @@ -69,4 +69,4 @@ sub pluginmain { #::logMsg($key_path." 
not found."); } } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/autopsyuninstall.pl b/RecentActivity/release/rr/plugins/autopsyuninstall.pl index 30fc0dcd74..d3f114dc5e 100644 --- a/RecentActivity/release/rr/plugins/autopsyuninstall.pl +++ b/RecentActivity/release/rr/plugins/autopsyuninstall.pl @@ -51,7 +51,7 @@ sub pluginmain { #::rptMsg($key_path); #::rptMsg(""); ::rptMsg(""); - ::rptMsg(""); + ::rptMsg("".gmtime($key->get_timestamp()).""); ::rptMsg(""); my %uninst; my @subkeys = $key->get_list_of_subkeys(); @@ -73,9 +73,9 @@ sub pluginmain { push(@{$uninst{$lastwrite}},$display); } foreach my $t (reverse sort {$a <=> $b} keys %uninst) { - #::rptMsg(""); + #::rptMsg(""); foreach my $item (@{$uninst{$t}}) { - ::rptMsg("" .$item.""); + ::rptMsg("" .$item.""); } #::rptMsg(""); } @@ -89,4 +89,4 @@ sub pluginmain { } ::rptMsg(""); } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/autopsyusb.pl b/RecentActivity/release/rr/plugins/autopsyusb.pl index 9f5b97fdbd..3c6b788c09 100644 --- a/RecentActivity/release/rr/plugins/autopsyusb.pl +++ b/RecentActivity/release/rr/plugins/autopsyusb.pl @@ -59,7 +59,7 @@ sub pluginmain { my $key_path = $ccs."\\Enum\\USB"; my $key; if ($key = $root_key->get_subkey($key_path)) { - ::rptMsg(""); + ::rptMsg(""); my @subkeys = $key->get_list_of_subkeys(); if (scalar(@subkeys) > 0) { @@ -69,8 +69,8 @@ sub pluginmain { if (scalar(@sk) > 0) { foreach my $k (@sk) { my $serial = $k->get_name(); - my $sn_lw = $k->get_timestamp(); - my $str = $comp_name.",".$dev_class.",".$serial.",".$sn_lw; + my $mtime = $k->get_timestamp(); + my $str = $comp_name.",".$dev_class.",".$serial.",".$mtime; my $loc; eval { @@ -94,7 +94,7 @@ sub pluginmain { }; - ::rptMsg("" . $serial . ""); + ::rptMsg("" . $serial . ""); } } } @@ -110,4 +110,4 @@ sub pluginmain { #::logMsg($key_path." 
not found."); } } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/autopsywinver.pl b/RecentActivity/release/rr/plugins/autopsywinver.pl index 73cb5a3017..758dc45b5c 100644 --- a/RecentActivity/release/rr/plugins/autopsywinver.pl +++ b/RecentActivity/release/rr/plugins/autopsywinver.pl @@ -32,7 +32,7 @@ sub pluginmain { my $reg = Parse::Win32Registry->new($hive); my $root_key = $reg->get_root_key; ::rptMsg(""); - ::rptMsg(""); + ::rptMsg(""); ::rptMsg(""); my $key_path = "Microsoft\\Windows NT\\CurrentVersion"; my $key; @@ -106,4 +106,4 @@ sub pluginmain { } ::rptMsg(""); } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/officedocs.pl b/RecentActivity/release/rr/plugins/officedocs.pl index ad9495c407..c7ee407a7f 100644 --- a/RecentActivity/release/rr/plugins/officedocs.pl +++ b/RecentActivity/release/rr/plugins/officedocs.pl @@ -56,8 +56,8 @@ sub pluginmain { #::rptMsg("MSOffice version ".$version." located."); my $key_path = "Software\\Microsoft\\Office\\".$version; my $of_key = $root_key->get_subkey($key_path); + ::rptMsg(" ".gmtime($of_key->get_timestamp()).""); ::rptMsg(""); - ::rptMsg(""); if ($of_key) { # Attempt to retrieve Word docs my @funcs = ("Open","Save As","File Save"); @@ -148,4 +148,4 @@ sub pluginmain { ::rptMsg(""); } -1; \ No newline at end of file +1; diff --git a/RecentActivity/release/rr/plugins/officedocs2010.pl b/RecentActivity/release/rr/plugins/officedocs2010.pl index 632751196c..2783dc01f6 100644 --- a/RecentActivity/release/rr/plugins/officedocs2010.pl +++ b/RecentActivity/release/rr/plugins/officedocs2010.pl @@ -218,4 +218,4 @@ sub pluginmain { } } -1; \ No newline at end of file +1; diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserActivityType.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserActivityType.java index cd94f12c5b..a54977273a 100644 --- 
a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserActivityType.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserActivityType.java @@ -24,8 +24,9 @@ import java.util.Map; /** * - * @author arivera + * No one seems to be using this */ +@Deprecated public enum BrowserActivityType { Cookies(0), Url(1), diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserType.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserType.java index 48c8e303bc..ebdf41f48d 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserType.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/BrowserType.java @@ -23,8 +23,9 @@ import java.util.Map; /** * - * @author arivera + * No one is using this. It should go away */ +@Deprecated public enum BrowserType { IE(0), //Internet Explorer FF(1), //Firefox diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Chrome.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Chrome.java index 2d10921deb..849aae992e 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Chrome.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Chrome.java @@ -24,8 +24,10 @@ package org.sleuthkit.autopsy.recentactivity; import com.google.gson.JsonArray; import com.google.gson.JsonElement; +import com.google.gson.JsonIOException; import com.google.gson.JsonObject; import com.google.gson.JsonParser; +import com.google.gson.JsonSyntaxException; import org.sleuthkit.autopsy.ingest.IngestServices; import org.sleuthkit.autopsy.datamodel.ContentUtils; import java.util.logging.Level; @@ -36,7 +38,6 @@ import java.io.FileNotFoundException; import java.io.FileReader; import java.io.IOException; import org.sleuthkit.autopsy.casemodule.services.FileManager; -import org.sleuthkit.autopsy.coreutils.EscapeUtil; import org.sleuthkit.autopsy.ingest.PipelineContext; import 
org.sleuthkit.autopsy.ingest.IngestDataSourceWorkerController; import org.sleuthkit.autopsy.ingest.IngestModuleDataSource; @@ -56,12 +57,13 @@ import org.sleuthkit.datamodel.TskData; */ public class Chrome extends Extract { - private static final String chquery = "SELECT urls.url, urls.title, urls.visit_count, urls.typed_count, " + private static final String historyQuery = "SELECT urls.url, urls.title, urls.visit_count, urls.typed_count, " + "last_visit_time, urls.hidden, visits.visit_time, (SELECT urls.url FROM urls WHERE urls.id=visits.url) as from_visit, visits.transition FROM urls, visits WHERE urls.id = visits.url"; - private static final String chcookiequery = "select name, value, host_key, expires_utc,last_access_utc, creation_utc from cookies"; - private static final String chbookmarkquery = "SELECT starred.title, urls.url, starred.date_added, starred.date_modified, urls.typed_count,urls._last_visit_time FROM starred INNER JOIN urls ON urls.id = starred.url_id"; - private static final String chdownloadquery = "select full_path, url, start_time, received_bytes from downloads"; - private static final String chloginquery = "select origin_url, username_value, signon_realm from logins"; + private static final String cookieQuery = "select name, value, host_key, expires_utc,last_access_utc, creation_utc from cookies"; + private static final String bookmarkQuery = "SELECT starred.title, urls.url, starred.date_added, starred.date_modified, urls.typed_count,urls._last_visit_time FROM starred INNER JOIN urls ON urls.id = starred.url_id"; + private static final String downloadQuery = "select full_path, url, start_time, received_bytes from downloads"; + private static final String downloadQueryVersion30 = "SELECT current_path as full_path, url, start_time, received_bytes FROM downloads, downloads_url_chains WHERE downloads.id=downloads_url_chains.id"; + private static final String loginQuery = "select origin_url, username_value, signon_realm from logins"; private final 
Logger logger = Logger.getLogger(this.getClass().getName()); public int ChromeCount = 0; final public static String MODULE_VERSION = "1.0"; @@ -80,6 +82,7 @@ public class Chrome extends Extract { @Override public void process(PipelineContextpipelineContext, Content dataSource, IngestDataSourceWorkerController controller) { + dataFound = false; this.getHistory(dataSource, controller); this.getBookmark(dataSource, controller); this.getCookie(dataSource, controller); @@ -87,6 +90,11 @@ public class Chrome extends Extract { this.getDownload(dataSource, controller); } + /** + * Query for history databases and add artifacts + * @param dataSource + * @param controller + */ private void getHistory(Content dataSource, IngestDataSourceWorkerController controller) { FileManager fileManager = currentCase.getServices().getFileManager(); @@ -94,7 +102,10 @@ public class Chrome extends Extract { try { historyFiles = fileManager.findFiles(dataSource, "History", "Chrome"); } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error when trying to get Chrome history files.", ex); + String msg = "Error when trying to get Chrome history files."; + logger.log(Level.SEVERE, msg, ex); + this.addErrorMessage(this.getName() + ": " + msg); + return; } // get only the allocated ones, for now @@ -106,15 +117,16 @@ public class Chrome extends Extract { } // log a message if we don't have any allocated history files - if (allocatedHistoryFiles.size() == 0) { - logger.log(Level.INFO, "Could not find any allocated Chrome history files."); + if (allocatedHistoryFiles.isEmpty()) { + String msg = "Could not find any allocated Chrome history files."; + logger.log(Level.INFO, msg); return; } + dataFound = true; int j = 0; while (j < historyFiles.size()) { String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + historyFiles.get(j).getName().toString() + j + ".db"; - int errors = 0; final AbstractFile historyFile = historyFiles.get(j++); if (historyFile.getSize() 
== 0) { continue; @@ -132,33 +144,31 @@ public class Chrome extends Extract { break; } List> tempList = null; - tempList = this.dbConnect(temps, chquery); + tempList = this.dbConnect(temps, historyQuery); logger.log(Level.INFO, moduleName + "- Now getting history from " + temps + " with " + tempList.size() + "artifacts identified."); for (HashMap result : tempList) { Collection bbattributes = new ArrayList(); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", ((result.get("url").toString() != null) ? result.get("url").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "Recent Activity", ((result.get("url").toString() != null) ? EscapeUtil.decodeURL(result.get("url").toString()) : ""))); - //TODO Revisit usage of deprecated constructor per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "Recent Activity", "Last Visited", ((Long.valueOf(result.get("last_visit_time").toString())) / 10000000))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", ((Long.valueOf(result.get("last_visit_time").toString())) / 10000000))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_REFERRER.getTypeID(), "Recent Activity", ((result.get("from_visit").toString() != null) ? result.get("from_visit").toString() : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "Recent Activity", ((result.get("title").toString() != null) ? result.get("title").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_TITLE.getTypeID(), "Recent Activity", ((result.get("title").toString() != null) ? 
result.get("title").toString() : ""))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", (Util.extractDomain((result.get("url").toString() != null) ? result.get("url").toString() : "")))); this.addArtifact(ARTIFACT_TYPE.TSK_WEB_HISTORY, historyFile, bbattributes); } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Chrome web history artifacts."); - } - dbFile.delete(); } services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY)); } + /** + * Search for bookmark files and make artifacts. + * @param dataSource + * @param controller + */ private void getBookmark(Content dataSource, IngestDataSourceWorkerController controller) { FileManager fileManager = currentCase.getServices().getFileManager(); @@ -166,98 +176,124 @@ public class Chrome extends Extract { try { bookmarkFiles = fileManager.findFiles(dataSource, "Bookmarks", "Chrome"); } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error when trying to get Chrome history files.", ex); + String msg = "Error when trying to get Chrome Bookmark files."; + logger.log(Level.SEVERE, msg, ex); + this.addErrorMessage(this.getName() + ": " + msg); + return; } + if (bookmarkFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any Chrome bookmark files."); + return; + } + + dataFound = true; int j = 0; - if (bookmarkFiles != null && !bookmarkFiles.isEmpty()) { - while (j < bookmarkFiles.size()) { - AbstractFile bookmarkFile = bookmarkFiles.get(j++); - String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + bookmarkFile.getName().toString() + j + ".db"; - int errors = 0; - try { - ContentUtils.writeToFile(bookmarkFile, new File(temps)); - } catch (IOException ex) { - logger.log(Level.SEVERE, "Error writing temp sqlite 
db for Chrome bookmark artifacts.{0}", ex); - this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + bookmarkFile.getName()); + + while (j < bookmarkFiles.size()) { + AbstractFile bookmarkFile = bookmarkFiles.get(j++); + if (bookmarkFile.getSize() == 0) { + continue; + } + String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + bookmarkFile.getName().toString() + j + ".db"; + try { + ContentUtils.writeToFile(bookmarkFile, new File(temps)); + } catch (IOException ex) { + logger.log(Level.SEVERE, "Error writing temp sqlite db for Chrome bookmark artifacts.{0}", ex); + this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + bookmarkFile.getName()); + continue; + } + + logger.log(Level.INFO, moduleName + "- Now getting Bookmarks from " + temps); + File dbFile = new File(temps); + if (controller.isCancelled()) { + dbFile.delete(); + break; + } + + FileReader tempReader; + try { + tempReader = new FileReader(temps); + } catch (FileNotFoundException ex) { + logger.log(Level.SEVERE, "Error while trying to read into the Bookmarks for Chrome.", ex); + this.addErrorMessage(this.getName() + ": Error while trying to analyze file: " + bookmarkFile.getName()); + continue; + } + + final JsonParser parser = new JsonParser(); + JsonElement jsonElement; + JsonObject jElement, jRoot, jBookmark; + JsonArray jBookmarkArray; + + try { + jsonElement = parser.parse(tempReader); + jElement = jsonElement.getAsJsonObject(); + jRoot = jElement.get("roots").getAsJsonObject(); + jBookmark = jRoot.get("bookmark_bar").getAsJsonObject(); + jBookmarkArray = jBookmark.getAsJsonArray("children"); + } catch (JsonIOException | JsonSyntaxException | IllegalStateException ex) { + logger.log(Level.WARNING, "Error parsing Json from Chrome Bookmark.", ex); + this.addErrorMessage(this.getName() + ": Error while trying to analyze file: " + bookmarkFile.getName()); + continue; + } + + for (JsonElement result : jBookmarkArray) { 
+ JsonObject address = result.getAsJsonObject(); + if (address == null) { continue; } - logger.log(Level.INFO, moduleName + "- Now getting Bookmarks from " + temps); - File dbFile = new File(temps); - if (controller.isCancelled()) { - dbFile.delete(); - break; + JsonElement urlEl = address.get("url"); + String url = null; + if (urlEl != null) { + url = urlEl.getAsString(); } + else { + url = ""; + } + String name = null; + JsonElement nameEl = address.get("name"); + if (nameEl != null) { + name = nameEl.getAsString(); + } + else { + name = ""; + } + Long date = null; + JsonElement dateEl = address.get("date_added"); + if (dateEl != null) { + date = dateEl.getAsLong(); + } + else { + date = Long.valueOf(0); + } + String domain = Util.extractDomain(url); try { - final JsonParser parser = new JsonParser(); - JsonElement jsonElement = parser.parse(new FileReader(temps)); - JsonObject jElement = jsonElement.getAsJsonObject(); - JsonObject jRoot = jElement.get("roots").getAsJsonObject(); - JsonObject jBookmark = jRoot.get("bookmark_bar").getAsJsonObject(); - JsonArray jBookmarkArray = jBookmark.getAsJsonArray("children"); - for (JsonElement result : jBookmarkArray) { - try { - JsonObject address = result.getAsJsonObject(); - if (address == null) { - continue; - } - JsonElement urlEl = address.get("url"); - String url = null; - if (urlEl != null) { - url = urlEl.getAsString(); - } - else { - url = ""; - } - String name = null; - JsonElement nameEl = address.get("name"); - if (nameEl != null) { - name = nameEl.getAsString(); - } - else { - name = ""; - } - Long date = null; - JsonElement dateEl = address.get("date_added"); - if (dateEl != null) { - date = dateEl.getAsLong(); - } - else { - date = Long.valueOf(0); - } - String domain = Util.extractDomain(url); - BlackboardArtifact bbart = bookmarkFile.newArtifact(ARTIFACT_TYPE.TSK_WEB_BOOKMARK); - Collection bbattributes = new ArrayList(); - //TODO Revisit usage of deprecated constructor as per TSK-583 - 
//bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "Recent Activity", "Last Visited", (date / 10000000))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", (date / 10000000))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", url)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "Recent Activity", EscapeUtil.decodeURL(url))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "Recent Activity", name)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", domain)); - bbart.addAttributes(bbattributes); - } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error while trying to insert Chrom bookmark artifact{0}", ex); - errors++; - } - } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Chrome bookmark artifacts."); - } - } catch (FileNotFoundException ex) { - logger.log(Level.SEVERE, "Error while trying to read into the Bookmarks for Chrome." 
+ ex); + BlackboardArtifact bbart = bookmarkFile.newArtifact(ARTIFACT_TYPE.TSK_WEB_BOOKMARK); + Collection bbattributes = new ArrayList(); + //TODO Revisit usage of deprecated constructor as per TSK-583 + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "Recent Activity", "Last Visited", (date / 10000000))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", url)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_TITLE.getTypeID(), "Recent Activity", name)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_CREATED.getTypeID(), "Recent Activity", (date / 10000000))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", domain)); + bbart.addAttributes(bbattributes); + } catch (TskCoreException ex) { + logger.log(Level.SEVERE, "Error while trying to insert Chrome bookmark artifact{0}", ex); + this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + bookmarkFile.getName()); } - - dbFile.delete(); } - - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_BOOKMARK)); + dbFile.delete(); } + + services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_BOOKMARK)); } - //COOKIES section - // This gets the cookie info + /** + * Queries for cookie files and adds artifacts + * @param dataSource + * @param controller + */ private void getCookie(Content dataSource, IngestDataSourceWorkerController controller) { FileManager fileManager = currentCase.getServices().getFileManager(); @@ -265,125 +301,141 @@ public class Chrome extends Extract { try { cookiesFiles = fileManager.findFiles(dataSource, "Cookies", "Chrome"); } catch (TskCoreException ex) { - 
logger.log(Level.SEVERE, "Error when trying to get Chrome history files.", ex); + String msg = "Error when trying to get Chrome history files."; + logger.log(Level.SEVERE, msg, ex); + this.addErrorMessage(this.getName() + ": " + msg); + return; } - int j = 0; - if (cookiesFiles != null && !cookiesFiles.isEmpty()) { - while (j < cookiesFiles.size()) { - AbstractFile cookiesFile = cookiesFiles.get(j++); - String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + cookiesFile.getName().toString() + j + ".db"; - int errors = 0; - try { - ContentUtils.writeToFile(cookiesFile, new File(temps)); - } catch (IOException ex) { - logger.log(Level.SEVERE, "Error writing temp sqlite db for Chrome cookie artifacts.{0}", ex); - this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + cookiesFile.getName()); - continue; - } - File dbFile = new File(temps); - if (controller.isCancelled()) { - dbFile.delete(); - break; - } - - List<HashMap<String, Object>> tempList = this.dbConnect(temps, chcookiequery); - logger.log(Level.INFO, moduleName + "- Now getting cookies from " + temps + " with " + tempList.size() + "artifacts identified."); - for (HashMap<String, Object> result : tempList) { - Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "Recent Activity", "Title", ((result.get("name").toString() != null) ? result.get("name").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "Recent Activity", "Last Visited", ((Long.valueOf(result.get("last_access_utc").toString())) / 10000000))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "Recent Activity", ((result.get("name").toString() != null) ?
result.get("name").toString() : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "Recent Activity", ((Long.valueOf(result.get("last_access_utc").toString())) / 10000000))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "Recent Activity", ((result.get("value").toString() != null) ? result.get("value").toString() : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", ((result.get("host_key").toString() != null) ? result.get("host_key").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "Recent Activity", ((result.get("host_key").toString() != null) ? EscapeUtil.decodeURL(result.get("host_key").toString()) : ""))); - String domain = result.get("host_key").toString(); - domain = domain.replaceFirst("^\\.+(?!$)", ""); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", domain)); - this.addArtifact(ARTIFACT_TYPE.TSK_WEB_COOKIE, cookiesFile, bbattributes); - - } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Chrome cookie artifacts."); - } + if (cookiesFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any Chrome cookies files."); + return; + } + dataFound = true; + int j = 0; + while (j < cookiesFiles.size()) { + AbstractFile cookiesFile = cookiesFiles.get(j++); + if (cookiesFile.getSize() == 0) { + continue; + } + String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + cookiesFile.getName().toString() + j + ".db"; + try { + ContentUtils.writeToFile(cookiesFile, new File(temps)); + } catch (IOException ex) { + logger.log(Level.SEVERE, "Error writing temp sqlite db for Chrome cookie artifacts.{0}", ex); + this.addErrorMessage(this.getName() + 
": Error while trying to analyze file:" + cookiesFile.getName()); + continue; + } + File dbFile = new File(temps); + if (controller.isCancelled()) { dbFile.delete(); + break; } - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_COOKIE)); + List<HashMap<String, Object>> tempList = this.dbConnect(temps, cookieQuery); + logger.log(Level.INFO, moduleName + "- Now getting cookies from " + temps + " with " + tempList.size() + "artifacts identified."); + for (HashMap<String, Object> result : tempList) { + Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", ((result.get("host_key").toString() != null) ? result.get("host_key").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "Recent Activity", ((Long.valueOf(result.get("last_access_utc").toString())) / 10000000))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "Recent Activity", ((result.get("name").toString() != null) ? result.get("name").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "Recent Activity", ((result.get("value").toString() != null) ?
result.get("value").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); + String domain = result.get("host_key").toString(); + domain = domain.replaceFirst("^\\.+(?!$)", ""); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", domain)); + this.addArtifact(ARTIFACT_TYPE.TSK_WEB_COOKIE, cookiesFile, bbattributes); + } + + dbFile.delete(); } + + services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_COOKIE)); } - //Downloads section - // This gets the downloads info + /** + * Queries for download files and adds artifacts + * @param dataSource + * @param controller + */ private void getDownload(Content dataSource, IngestDataSourceWorkerController controller) { FileManager fileManager = currentCase.getServices().getFileManager(); - List<AbstractFile> historyFiles = null; + List<AbstractFile> downloadFiles = null; try { - historyFiles = fileManager.findFiles(dataSource, "History", "Chrome"); + downloadFiles = fileManager.findFiles(dataSource, "History", "Chrome"); } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error when trying to get Chrome history files.", ex); + String msg = "Error when trying to get Chrome history files."; + logger.log(Level.SEVERE, msg, ex); + this.addErrorMessage(this.getName() + ": " + msg); + return; } + if (downloadFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any Chrome download files."); + return; + } + + dataFound = true; int j = 0; - if (historyFiles != null && !historyFiles.isEmpty()) { - while (j < historyFiles.size()) { - AbstractFile historyFile = historyFiles.get(j++); - if (historyFile.getSize() == 0) { - continue; - } - String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + historyFile.getName().toString() + j + ".db"; - int errors = 0; - try { - ContentUtils.writeToFile(historyFile, new File(temps)); - } catch
(IOException ex) { - logger.log(Level.SEVERE, "Error writing temp sqlite db for Chrome download artifacts.{0}", ex); - this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + historyFile.getName()); - continue; - } - File dbFile = new File(temps); - if (controller.isCancelled()) { - dbFile.delete(); - break; - } - - List<HashMap<String, Object>> tempList = this.dbConnect(temps, chdownloadquery); - logger.log(Level.INFO, moduleName + "- Now getting downloads from " + temps + " with " + tempList.size() + "artifacts identified."); - for (HashMap<String, Object> result : tempList) { - Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), "Recent Activity", (result.get("full_path").toString()))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH_ID.getTypeID(), "Recent Activity", Util.findID(dataSource, (result.get("full_path").toString())))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", ((result.get("url").toString() != null) ? result.get("url").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "Recent Activity", ((result.get("url").toString() != null) ? EscapeUtil.decodeURL(result.get("url").toString()) : ""))); - Long time = (Long.valueOf(result.get("start_time").toString())); - String Tempdate = time.toString(); - time = Long.valueOf(Tempdate) / 10000000; - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "Recent Activity", "Last Visited", time)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", time)); - String domain = Util.extractDomain((result.get("url").toString() != null) ?
result.get("url").toString() : ""); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", domain)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); - this.addArtifact(ARTIFACT_TYPE.TSK_WEB_DOWNLOAD, historyFile, bbattributes); - - } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Chrome download artifacts."); - } - + while (j < downloadFiles.size()) { + AbstractFile downloadFile = downloadFiles.get(j++); + if (downloadFile.getSize() == 0) { + continue; + } + String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + downloadFile.getName().toString() + j + ".db"; + try { + ContentUtils.writeToFile(downloadFile, new File(temps)); + } catch (IOException ex) { + logger.log(Level.SEVERE, "Error writing temp sqlite db for Chrome download artifacts.{0}", ex); + this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + downloadFile.getName()); + continue; + } + File dbFile = new File(temps); + if (controller.isCancelled()) { dbFile.delete(); + break; } - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_DOWNLOAD)); + List<HashMap<String, Object>> tempList = null; + + if (isChromePreVersion30(temps)) { + tempList = this.dbConnect(temps, downloadQuery); + } else { + tempList = this.dbConnect(temps, downloadQueryVersion30); + } + + logger.log(Level.INFO, moduleName + "- Now getting downloads from " + temps + " with " + tempList.size() + "artifacts identified."); + for (HashMap<String, Object> result : tempList) { + Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), "Recent Activity", (result.get("full_path").toString()))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH_ID.getTypeID(), "Recent Activity", Util.findID(dataSource,
(result.get("full_path").toString())))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", ((result.get("url").toString() != null) ? result.get("url").toString() : ""))); + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "Recent Activity", ((result.get("url").toString() != null) ? EscapeUtil.decodeURL(result.get("url").toString()) : ""))); + Long time = (Long.valueOf(result.get("start_time").toString())); + String Tempdate = time.toString(); + time = Long.valueOf(Tempdate) / 10000000; + //TODO Revisit usage of deprecated constructor as per TSK-583 + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "Recent Activity", "Last Visited", time)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", time)); + String domain = Util.extractDomain((result.get("url").toString() != null) ? result.get("url").toString() : ""); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", domain)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); + this.addArtifact(ARTIFACT_TYPE.TSK_WEB_DOWNLOAD, downloadFile, bbattributes); + } + + dbFile.delete(); } + + services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_DOWNLOAD)); } - //Login/Password section - // This gets the user info + /** + * Queries for login files and adds artifacts + * @param dataSource + * @param controller + */ private void getLogin(Content dataSource, IngestDataSourceWorkerController controller) { FileManager fileManager = currentCase.getServices().getFileManager(); @@ -391,54 +443,59 @@ public class Chrome extends Extract { try { signonFiles = fileManager.findFiles(dataSource, "signons.sqlite", "Chrome"); } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error 
when trying to get Chrome history files.", ex); + String msg = "Error when trying to get Chrome history files."; + logger.log(Level.SEVERE, msg, ex); + this.addErrorMessage(this.getName() + ": " + msg); + return; } + if (signonFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any Chrome signon files."); + return; + } + + dataFound = true; int j = 0; - if (signonFiles != null && !signonFiles.isEmpty()) { - while (j < signonFiles.size()) { - AbstractFile signonFile = signonFiles.get(j++); - String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + signonFile.getName().toString() + j + ".db"; - int errors = 0; - try { - ContentUtils.writeToFile(signonFile, new File(temps)); - } catch (IOException ex) { - logger.log(Level.SEVERE, "Error writing temp sqlite db for Chrome login artifacts.{0}", ex); - this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + signonFile.getName()); - continue; - } - File dbFile = new File(temps); - if (controller.isCancelled()) { - dbFile.delete(); - break; - } - List<HashMap<String, Object>> tempList = this.dbConnect(temps, chloginquery); - logger.log(Level.INFO, moduleName + "- Now getting login information from " + temps + " with " + tempList.size() + "artifacts identified."); - for (HashMap<String, Object> result : tempList) { - Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", ((result.get("origin_url").toString() != null) ? result.get("origin_url").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "Recent Activity", ((result.get("origin_url").toString() != null) ?
EscapeUtil.decodeURL(result.get("origin_url").toString()) : ""))); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", "Last Visited", ((Long.valueOf(result.get("last_visit_time").toString())) / 1000000))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", ((Long.valueOf(result.get("last_visit_time").toString())) / 1000000))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_REFERRER.getTypeID(), "Recent Activity", ((result.get("from_visit").toString() != null) ? result.get("from_visit").toString() : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "Recent Activity", ((result.get("title").toString() != null) ? result.get("title").toString() : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", (Util.extractDomain((result.get("origin_url").toString() != null) ? result.get("url").toString() : "")))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_USER_NAME.getTypeID(), "Recent Activity", ((result.get("username_value").toString() != null) ? 
result.get("username_value").toString().replaceAll("'", "''") : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", result.get("signon_realm").toString())); - this.addArtifact(ARTIFACT_TYPE.TSK_WEB_HISTORY, signonFile, bbattributes); - - } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Chrome login artifacts."); - } - + while (j < signonFiles.size()) { + AbstractFile signonFile = signonFiles.get(j++); + if (signonFile.getSize() == 0) { + continue; + } + String temps = RAImageIngestModule.getRATempPath(currentCase, "chrome") + File.separator + signonFile.getName().toString() + j + ".db"; + try { + ContentUtils.writeToFile(signonFile, new File(temps)); + } catch (IOException ex) { + logger.log(Level.SEVERE, "Error writing temp sqlite db for Chrome login artifacts.{0}", ex); + this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + signonFile.getName()); + continue; + } + File dbFile = new File(temps); + if (controller.isCancelled()) { dbFile.delete(); + break; + } + List<HashMap<String, Object>> tempList = this.dbConnect(temps, loginQuery); + logger.log(Level.INFO, moduleName + "- Now getting login information from " + temps + " with " + tempList.size() + "artifacts identified."); + for (HashMap<String, Object> result : tempList) { + Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "Recent Activity", ((result.get("origin_url").toString() != null) ? result.get("origin_url").toString() : ""))); + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "Recent Activity", ((result.get("origin_url").toString() != null) ?
EscapeUtil.decodeURL(result.get("origin_url").toString()) : ""))); + //TODO Revisit usage of deprecated constructor as per TSK-583 + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", "Last Visited", ((Long.valueOf(result.get("last_visit_time").toString())) / 1000000))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "Recent Activity", ((Long.valueOf(result.get("last_visit_time").toString())) / 1000000))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_REFERRER.getTypeID(), "Recent Activity", ((result.get("from_visit").toString() != null) ? result.get("from_visit").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "Recent Activity", ((result.get("title").toString() != null) ? result.get("title").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "Recent Activity", "Chrome")); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", (Util.extractDomain((result.get("origin_url").toString() != null) ? result.get("url").toString() : "")))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_USER_NAME.getTypeID(), "Recent Activity", ((result.get("username_value").toString() != null) ? 
result.get("username_value").toString().replaceAll("'", "''") : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "Recent Activity", result.get("signon_realm").toString())); + this.addArtifact(ARTIFACT_TYPE.TSK_WEB_HISTORY, signonFile, bbattributes); } - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY)); + dbFile.delete(); } + + services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY)); } @@ -448,12 +505,10 @@ public class Chrome extends Extract { @Override public void complete() { - logger.info("Chrome Extract has completed"); } @Override public void stop() { - logger.info("Attempted to stop chrome extract, but operation is not supported; skipping..."); } @Override @@ -466,4 +521,16 @@ public class Chrome extends Extract { public boolean hasBackgroundJobsRunning() { return false; } + + private boolean isChromePreVersion30(String temps) { + String query = "PRAGMA table_info(downloads)"; + List<HashMap<String, Object>> columns = this.dbConnect(temps, query); + for (HashMap<String, Object> col : columns) { + if (col.get("name").equals("url")) { + return true; + } + } + + return false; + } } diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Extract.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Extract.java index b3a5a5fe2b..8f3bce5716 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Extract.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Extract.java @@ -40,10 +40,11 @@ abstract public class Extract extends IngestModuleDataSource{ public final Logger logger = Logger.getLogger(this.getClass().getName()); protected final ArrayList<String> errorMessages = new ArrayList<>(); protected String moduleName = ""; + protected boolean dataFound = false; //hide public constructor to prevent from instantiation by ingest module loader Extract() { - + dataFound = false; } /** @@
-103,6 +104,7 @@ abstract public class Extract extends IngestModuleDataSource{ tempdbconnect.closeConnection(); } catch (SQLException ex) { logger.log(Level.SEVERE, "Error while trying to read into a sqlite db." + connectionString, ex); + errorMessages.add(getName() + ": Failed to query database."); return Collections.<HashMap<String, Object>>emptyList(); } return list; @@ -142,4 +144,8 @@ abstract public class Extract extends IngestModuleDataSource{ public String getName() { return moduleName; } + + public boolean foundData() { + return dataFound; + } } \ No newline at end of file diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractIE.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractIE.java index bbf876a7c6..980c48aba4 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractIE.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractIE.java @@ -46,9 +46,9 @@ import java.util.regex.Pattern; // TSK Imports import org.openide.modules.InstalledFileLocator; import org.sleuthkit.autopsy.casemodule.Case; -import org.sleuthkit.autopsy.coreutils.EscapeUtil; import org.sleuthkit.autopsy.coreutils.JLNK; import org.sleuthkit.autopsy.coreutils.JLnkParser; +import org.sleuthkit.autopsy.coreutils.JLnkParserException; import org.sleuthkit.autopsy.datamodel.ContentUtils; import org.sleuthkit.autopsy.ingest.IngestDataSourceWorkerController; import org.sleuthkit.autopsy.ingest.IngestServices; @@ -73,9 +73,6 @@ public class ExtractIE extends Extract { private String PASCO_LIB_PATH; private String JAVA_PATH; - // List of Pasco result files for this data source - private List pascoResults; - boolean pascoFound = false; final public static String MODULE_VERSION = "1.0"; private static final SimpleDateFormat dateFormatter = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'"); @@ -96,69 +93,41 @@ public class ExtractIE extends Extract { @Override public void process(PipelineContext<IngestModuleDataSource> pipelineContext, Content dataSource,
IngestDataSourceWorkerController controller) { - /* @@@ BC: All of these try / catches are added because the exception - * handing in here isn't the best. We were losing results before because - * cookies was throwing an exceptionb ecause of an invalid URL and we - * skipped the rest of the data types. Need to push this eror handling - * further down though. - */ - - try { - this.extractAndRunPasco(dataSource, controller); - } - catch (Exception e) { - logger.log(Level.SEVERE, "Error extracting IE index.dat files", e); - addErrorMessage("Error extracting and analyzing IE index.dat files"); - } - - try { - this.getBookmark(dataSource, controller); - } - catch (Exception e) { - logger.log(Level.SEVERE, "Error parsing IE bookmarks", e); - addErrorMessage("Error parsing IE bookmarks"); - } - - try { - this.getCookie(dataSource, controller); - } - catch (Exception e) { - logger.log(Level.SEVERE, "Error parsing IE cookies", e); - addErrorMessage("Error parsing IE Cookies"); - } - - try { - this.getRecentDocuments(dataSource, controller); - } - catch (Exception e) { - logger.log(Level.SEVERE, "Error parsing IE Recent Docs", e); - addErrorMessage("Error parsing IE Recent Documents"); - } - - try { - this.getHistory(pascoResults); - } - catch (Exception e) { - logger.log(Level.SEVERE, "Error parsing IE History", e); - addErrorMessage("Error parsing IE History"); - } + dataFound = false; + this.getBookmark(dataSource, controller); + this.getCookie(dataSource, controller); + this.getRecentDocuments(dataSource, controller); + this.getHistory(dataSource, controller); } - //Favorites section - // This gets the favorite info - private void getBookmark(Content dataSource, IngestDataSourceWorkerController controller) { - - int errors = 0; - + /** + * Finds the files storing bookmarks and creates artifacts + * @param dataSource + * @param controller + */ + private void getBookmark(Content dataSource, IngestDataSourceWorkerController controller) { 
org.sleuthkit.autopsy.casemodule.services.FileManager fileManager = currentCase.getServices().getFileManager(); List<AbstractFile> favoritesFiles = null; try { favoritesFiles = fileManager.findFiles(dataSource, "%.url", "Favorites"); } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching 'index.data' files for Internet Explorer history."); + logger.log(Level.WARNING, "Error fetching 'url' files for Internet Explorer bookmarks.", ex); + this.addErrorMessage(this.getName() + ": Error getting Internet Explorer Bookmarks."); + return; } + if (favoritesFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any IE bookmark files."); + return; + } + + dataFound = true; for (AbstractFile favoritesFile : favoritesFiles) { + if (favoritesFile.getSize() == 0) { + continue; + } + + // @@@ WHY DON"T WE PARSE THIS FILE more intelligently. It's text-based if (controller.isCancelled()) { break; } @@ -168,6 +137,8 @@ public class ExtractIE extends Extract { final int bytesRead = fav.read(t, 0, fav.getSize()); } catch (TskCoreException ex) { logger.log(Level.SEVERE, "Error reading bytes of Internet Explorer favorite.", ex); + this.addErrorMessage(this.getName() + ": Error reading Internet Explorer Bookmark file " + favoritesFile.getName()); + continue; } String bookmarkString = new String(t); String re1 = ".*?"; // Non-greedy match on filler @@ -185,46 +156,53 @@ public class ExtractIE extends Extract { String domain = Util.extractDomain(url); Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); - //TODO revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", "Last Visited", datetime)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", datetime)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", url)); - //bbattributes.add(new
BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", EscapeUtil.decodeURL(url))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", name)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_TITLE.getTypeID(), "RecentActivity", name)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_CREATED.getTypeID(), "RecentActivity", datetime)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "Internet Explorer")); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", domain)); this.addArtifact(ARTIFACT_TYPE.TSK_WEB_BOOKMARK, favoritesFile, bbattributes); - - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_BOOKMARK)); - } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Internet Explorer favorites."); } + services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_BOOKMARK)); } - //Cookies section - // This gets the cookies info - private void getCookie(Content dataSource, IngestDataSourceWorkerController controller) { - + /** + * Finds files that store cookies and adds artifacts for them. 
+ * @param dataSource + * @param controller + */ + private void getCookie(Content dataSource, IngestDataSourceWorkerController controller) { org.sleuthkit.autopsy.casemodule.services.FileManager fileManager = currentCase.getServices().getFileManager(); List<AbstractFile> cookiesFiles = null; try { cookiesFiles = fileManager.findFiles(dataSource, "%.txt", "Cookies"); } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching 'index.data' files for Internet Explorer history."); + logger.log(Level.WARNING, "Error getting cookie files for IE"); + this.addErrorMessage(this.getName() + ": " + "Error getting Internet Explorer cookie files."); + return; } - int errors = 0; + if (cookiesFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any IE cookies files."); + return; + } + + dataFound = true; for (AbstractFile cookiesFile : cookiesFiles) { if (controller.isCancelled()) { break; } - Content fav = cookiesFile; - byte[] t = new byte[(int) fav.getSize()]; + if (cookiesFile.getSize() == 0) { + continue; + } + + byte[] t = new byte[(int) cookiesFile.getSize()]; try { - final int bytesRead = fav.read(t, 0, fav.getSize()); + final int bytesRead = cookiesFile.read(t, 0, cookiesFile.getSize()); } catch (TskCoreException ex) { logger.log(Level.SEVERE, "Error reading bytes of Internet Explorer cookie.", ex); + this.addErrorMessage(this.getName() + ": Error reading Internet Explorer cookie " + cookiesFile.getName()); + continue; } String cookieString = new String(t); String[] values = cookieString.split("\n"); @@ -232,33 +210,27 @@ public class ExtractIE extends Extract { String value = values.length > 1 ? values[1] : ""; String name = values.length > 0 ?
values[0] : ""; Long datetime = cookiesFile.getCrtime(); - String Tempdate = datetime.toString(); - datetime = Long.valueOf(Tempdate); + String tempDate = datetime.toString(); + datetime = Long.valueOf(tempDate); String domain = Util.extractDomain(url); Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", url)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", EscapeUtil.decodeURL(url))); - //TODO Revisit usage of deprecated Constructor as of TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", "Last Visited", datetime)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", datetime)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "RecentActivity", value)); - //TODO Revisit usage of deprecated Constructor as of TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", "Title", (name != null) ? name : "")); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", (name != null) ?
name : "")); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "RecentActivity", value)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "Internet Explorer")); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", domain)); this.addArtifact(ARTIFACT_TYPE.TSK_WEB_COOKIE, cookiesFile, bbattributes); } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Internet Explorer cookies."); - } - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_COOKIE)); } - //Recent Documents section - // This gets the recent object info + /** + * Find the documents that Windows stores about recent documents and make artifacts. + * @param dataSource + * @param controller + */ private void getRecentDocuments(Content dataSource, IngestDataSourceWorkerController controller) { org.sleuthkit.autopsy.casemodule.services.FileManager fileManager = currentCase.getServices().getFileManager(); @@ -266,59 +238,66 @@ public class ExtractIE extends Extract { try { recentFiles = fileManager.findFiles(dataSource, "%.lnk", "Recent"); } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching 'index.data' files for Internet Explorer history."); + logger.log(Level.WARNING, "Error searching for .lnk files."); + this.addErrorMessage(this.getName() + ": Error getting lnk Files."); + return; } + if (recentFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any IE recent files."); + return; + } + + dataFound = true; for (AbstractFile recentFile : recentFiles) { if (controller.isCancelled()) { break; } - Content fav = recentFile; - if (fav.getSize() == 0) { + + if (recentFile.getSize() == 0) { continue; } JLNK lnk = null; - JLnkParser lnkParser = new JLnkParser(new ReadContentInputStream(fav), (int) fav.getSize()); + JLnkParser lnkParser = new JLnkParser(new 
ReadContentInputStream(recentFile), (int) recentFile.getSize()); try { lnk = lnkParser.parse(); - } - catch (Exception e) { + } catch (JLnkParserException e) { //TODO should throw a specific checked exception - logger.log(Level.SEVERE, "Error lnk parsing the file to get recent files" + recentFile); + boolean unalloc = recentFile.isMetaFlagSet(TskData.TSK_FS_META_FLAG_ENUM.UNALLOC) + || recentFile.isDirNameFlagSet(TskData.TSK_FS_NAME_FLAG_ENUM.UNALLOC); + if (unalloc == false) { + logger.log(Level.SEVERE, "Error lnk parsing the file to get recent files" + recentFile, e); + this.addErrorMessage(this.getName() + ": Error parsing Recent File " + recentFile.getName()); + } continue; } - String path = lnk.getBestPath(); - Long datetime = recentFile.getCrtime(); - + Collection bbattributes = new ArrayList(); + String path = lnk.getBestPath(); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), "RecentActivity", path)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", Util.getFileName(path))); - long id = Util.findID(dataSource, path); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH_ID.getTypeID(), "RecentActivity", id)); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", "Date Created", datetime)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", datetime)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH_ID.getTypeID(), "RecentActivity", Util.findID(dataSource, path))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", recentFile.getCrtime())); this.addArtifact(ARTIFACT_TYPE.TSK_RECENT_OBJECT, recentFile, bbattributes); } - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", 
BlackboardArtifact.ARTIFACT_TYPE.TSK_RECENT_OBJECT)); } - - - private void extractAndRunPasco(Content dataSource, IngestDataSourceWorkerController controller) { - pascoResults = new ArrayList(); - + + /** + * Locates index.dat files, runs Pasco on them, and creates artifacts. + * @param dataSource + * @param controller + */ + private void getHistory(Content dataSource, IngestDataSourceWorkerController controller) { logger.log(Level.INFO, "Pasco results path: " + moduleTempResultsDir); + boolean foundHistory = false; final File pascoRoot = InstalledFileLocator.getDefault().locate("pasco2", ExtractIE.class.getPackage().getName(), false); if (pascoRoot == null) { - logger.log(Level.SEVERE, "Pasco2 not found"); - pascoFound = false; + this.addErrorMessage(this.getName() + ": Unable to get IE History: pasco not found"); + logger.log(Level.SEVERE, "Error finding pasco program "); return; - } else { - pascoFound = true; - } - + } + final String pascoHome = pascoRoot.getAbsolutePath(); logger.log(Level.INFO, "Pasco2 home: " + pascoHome); @@ -328,16 +307,24 @@ public class ExtractIE extends Extract { File resultsDir = new File(moduleTempResultsDir); resultsDir.mkdirs(); - // get index.dat files org.sleuthkit.autopsy.casemodule.services.FileManager fileManager = currentCase.getServices().getFileManager(); List indexFiles = null; try { indexFiles = fileManager.findFiles(dataSource, "index.dat"); } catch (TskCoreException ex) { + this.addErrorMessage(this.getName() + ": Error getting Internet Explorer history files"); logger.log(Level.WARNING, "Error fetching 'index.data' files for Internet Explorer history."); + return; } + if (indexFiles.isEmpty()) { + String msg = "No Internet Explorer history files found."; + logger.log(Level.INFO, msg); + return; + } + + dataFound = true; String temps; String indexFileName; for (AbstractFile indexFile : indexFiles) { @@ -351,7 +338,6 @@ public class ExtractIE extends Extract { temps = RAImageIngestModule.getRATempPath(currentCase, "IE") +
File.separator + indexFileName; File datFile = new File(temps); if (controller.isCancelled()) { - datFile.delete(); break; } try { @@ -364,32 +350,40 @@ public class ExtractIE extends Extract { String filename = "pasco2Result." + indexFile.getId() + ".txt"; boolean bPascProcSuccess = executePasco(temps, filename); - pascoResults.add(filename); //At this point pasco2 processed the index files. //Now fetch the results, parse them and then delete the files. if (bPascProcSuccess) { - + parsePascoOutput(indexFile, filename); + foundHistory = true; + //Delete index.dat file since it was successfully processed by Pasco datFile.delete(); + } else { + logger.log(Level.WARNING, "pasco execution failed on: " + this.getName()); + this.addErrorMessage(this.getName() + ": Error processing Internet Explorer history."); } } + + if (foundHistory) { + services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY)); + } } - //Simple wrapper to JavaSystemCaller.Exec() to execute pasco2 jar - // TODO: Hardcoded command args/path needs to be removed. Maybe set some constants and set env variables for classpath - // I'm not happy with this code. Can't stand making a system call, is not an acceptable solution but is a hack for now. - private boolean executePasco(String indexFilePath, String filename) { - if (pascoFound == false) { - return false; - } + /** + * Execute pasco on a single file that has been saved to disk. 
+ * @param indexFilePath Path to local index.dat file to analyze + * @param outputFileName Name of file to save output to + * @return false on error + */ + private boolean executePasco(String indexFilePath, String outputFileName) { boolean success = true; Writer writer = null; try { - final String pascoOutFile = moduleTempResultsDir + File.separator + filename; - logger.log(Level.INFO, "Writing pasco results to: " + pascoOutFile); - writer = new FileWriter(pascoOutFile); + final String outputFileFullPath = moduleTempResultsDir + File.separator + outputFileName; + logger.log(Level.INFO, "Writing pasco results to: " + outputFileFullPath); + writer = new FileWriter(outputFileFullPath); execPasco = new ExecUtil(); execPasco.execute(writer, JAVA_PATH, "-cp", PASCO_LIB_PATH, @@ -412,143 +406,117 @@ public class ExtractIE extends Extract { } } } - return success; } - private void getHistory(List filenames) { - if (pascoFound == false) { + /** + * parse Pasco output and create artifacts + * @param origFile Original index.dat file that was analyzed to get this output + * @param pascoOutputFileName name of pasco output file + */ + private void parsePascoOutput(AbstractFile origFile, String pascoOutputFileName) { + + String fnAbs = moduleTempResultsDir + File.separator + pascoOutputFileName; + + File file = new File(fnAbs); + if (file.exists() == false) { + this.addErrorMessage(this.getName() + ": Pasco output not found: " + file.getName()); + logger.log(Level.WARNING, "Pasco Output not found: " + file.getPath()); return; } - // First thing we want to do is check to make sure the results directory - // is not empty. 
- File rFile = new File(moduleTempResultsDir); - if (rFile.exists()) { - //Give me a list of pasco results in that directory - File[] pascoFiles = rFile.listFiles(); - - if (pascoFiles.length > 0) { - for (File file : pascoFiles) { - String fileName = file.getName(); - if (!filenames.contains(fileName)) { - logger.log(Level.INFO, "Found a temp Pasco result file not in the list: {0}", fileName); - continue; - } - long artObjId = Long.parseLong(fileName.substring(fileName.indexOf(".") + 1, fileName.lastIndexOf("."))); - - // Make sure the file the is not empty or the Scanner will - // throw a "No Line found" Exception - if (file != null && file.length() > 0) { - try { - Scanner fileScanner = new Scanner(new FileInputStream(file.toString())); - //Skip the first three lines - fileScanner.nextLine(); - fileScanner.nextLine(); - fileScanner.nextLine(); - - while (fileScanner.hasNext()) { - - String line = fileScanner.nextLine(); - - // lines at end of file - if ((line.startsWith("LEAK entries")) || - (line.startsWith("REDR entries")) || - (line.startsWith("URL entries")) || - (line.startsWith("ent entries")) || - (line.startsWith("unknown entries"))) { - continue; - } - - if (line.startsWith("URL")) { - String[] lineBuff = line.split("\\t"); - - if (lineBuff.length < 4) { - // @@@ Could log a message here - continue; - } - - String ddtime = lineBuff[2]; - String actime = lineBuff[3]; - Long ftime = (long) 0; - String user = ""; - String realurl = ""; - String domain = ""; - - /* We've seen two types of lines: - * URL http://XYZ.com .... - * URL Visited: Joe@http://XYZ.com .... 
- */ - if (lineBuff[1].contains("@")) { - String url[] = lineBuff[1].split("@", 2); - user = url[0]; - user = user.replace("Visited:", ""); - user = user.replace(":Host:", ""); - user = user.replaceAll("(:)(.*?)(:)", ""); - user = user.trim(); - realurl = url[1]; - realurl = realurl.replace("Visited:", ""); - realurl = realurl.replaceAll(":(.*?):", ""); - realurl = realurl.replace(":Host:", ""); - realurl = realurl.trim(); - } - else { - user = ""; - realurl = lineBuff[1].trim(); - } - - domain = Util.extractDomain(realurl); - - if (!ddtime.isEmpty()) { - ddtime = ddtime.replace("T", " "); - ddtime = ddtime.substring(ddtime.length() - 5); - } - - if (!actime.isEmpty()) { - try { - Long epochtime = dateFormatter.parse(actime).getTime(); - ftime = epochtime.longValue(); - ftime = ftime / 1000; - } catch (ParseException e) { - logger.log(Level.SEVERE, "Error parsing Pasco results.", e); - } - } - - try { - BlackboardArtifact bbart = tskCase.getContentById(artObjId).newArtifact(ARTIFACT_TYPE.TSK_WEB_HISTORY); - Collection bbattributes = new ArrayList(); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", realurl)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", EscapeUtil.decodeURL(realurl))); - - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", ftime)); - - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_REFERRER.getTypeID(), "RecentActivity", "")); - - // bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", ddtime)); - - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "Internet Explorer")); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", domain)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_USER_NAME.getTypeID(), 
"RecentActivity", user)); - bbart.addAttributes(bbattributes); - } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error writing Internet Explorer web history artifact to the blackboard.", ex); - } - } - else { - // @@@ Log that we didn't parse this - } - - - } - } catch (FileNotFoundException ex) { - logger.log(Level.WARNING, "Unable to find the Pasco file at " + file.getPath(), ex); - } - } - } - } + // Make sure the file the is not empty or the Scanner will + // throw a "No Line found" Exception + if (file.length() == 0) { + return; } - services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY)); + Scanner fileScanner; + try { + fileScanner = new Scanner(new FileInputStream(file.toString())); + } catch (FileNotFoundException ex) { + this.addErrorMessage(this.getName() + ": Error parsing IE history entry " + file.getName()); + logger.log(Level.WARNING, "Unable to find the Pasco file at " + file.getPath(), ex); + return; + } + + while (fileScanner.hasNext()) { + String line = fileScanner.nextLine(); + if (!line.startsWith("URL")) { + continue; + } + + String[] lineBuff = line.split("\\t"); + + if (lineBuff.length < 4) { + logger.log(Level.INFO, "Found unrecognized IE history format."); + continue; + } + + String ddtime = lineBuff[2]; + String actime = lineBuff[3]; + Long ftime = (long) 0; + String user = ""; + String realurl = ""; + String domain = ""; + + /* We've seen two types of lines: + * URL http://XYZ.com .... + * URL Visited: Joe@http://XYZ.com .... 
+ */ + if (lineBuff[1].contains("@")) { + String url[] = lineBuff[1].split("@", 2); + user = url[0]; + user = user.replace("Visited:", ""); + user = user.replace(":Host:", ""); + user = user.replaceAll("(:)(.*?)(:)", ""); + user = user.trim(); + realurl = url[1]; + realurl = realurl.replace("Visited:", ""); + realurl = realurl.replaceAll(":(.*?):", ""); + realurl = realurl.replace(":Host:", ""); + realurl = realurl.trim(); + } else { + user = ""; + realurl = lineBuff[1].trim(); + } + + domain = Util.extractDomain(realurl); + + if (!ddtime.isEmpty()) { + ddtime = ddtime.replace("T", " "); + ddtime = ddtime.substring(ddtime.length() - 5); + } + + if (!actime.isEmpty()) { + try { + Long epochtime = dateFormatter.parse(actime).getTime(); + ftime = epochtime.longValue(); + ftime = ftime / 1000; + } catch (ParseException e) { + this.addErrorMessage(this.getName() + ": Error parsing Internet Explorer History entry."); + logger.log(Level.SEVERE, "Error parsing Pasco results.", e); + } + } + + try { + BlackboardArtifact bbart = origFile.newArtifact(ARTIFACT_TYPE.TSK_WEB_HISTORY); + Collection bbattributes = new ArrayList<>(); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", realurl)); + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", EscapeUtil.decodeURL(realurl))); + + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", ftime)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_REFERRER.getTypeID(), "RecentActivity", "")); + // @@@ Note that other browser modules are adding TITLE in here for the title + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "Internet Explorer")); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", domain)); + bbattributes.add(new
BlackboardAttribute(ATTRIBUTE_TYPE.TSK_USER_NAME.getTypeID(), "RecentActivity", user)); + bbart.addAttributes(bbattributes); + } catch (TskCoreException ex) { + logger.log(Level.SEVERE, "Error writing Internet Explorer web history artifact to the blackboard.", ex); + } + } + fileScanner.close(); } @Override @@ -558,23 +526,6 @@ public class ExtractIE extends Extract { @Override public void complete() { - // Delete all the results when complete - /*for (String file : pascoResults) { - String filePath = moduleTempResultsDir + File.separator + file; - try { - File f = new File(filePath); - if (f.exists() && f.canWrite()) { - f.delete(); - } else { - logger.log(Level.WARNING, "Unable to delete file " + filePath); - } - } catch (SecurityException ex) { - logger.log(Level.WARNING, "Incorrect permission to delete file " + filePath, ex); - } - } - */ - pascoResults.clear(); - logger.info("Internet Explorer extract has completed."); } @Override @@ -597,4 +548,4 @@ public class ExtractIE extends Extract { public boolean hasBackgroundJobsRunning() { return false; } -} \ No newline at end of file +} diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractRegistry.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractRegistry.java index 5643a8e0c1..8b0c89ade8 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractRegistry.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/ExtractRegistry.java @@ -32,7 +32,6 @@ import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.parsers.ParserConfigurationException; import org.openide.modules.InstalledFileLocator; -import org.sleuthkit.autopsy.casemodule.Case; import org.sleuthkit.autopsy.coreutils.ExecUtil; import org.sleuthkit.autopsy.coreutils.Logger; import org.sleuthkit.autopsy.coreutils.PlatformUtil; @@ -40,7 +39,6 @@ import org.sleuthkit.autopsy.datamodel.ContentUtils; import 
org.sleuthkit.autopsy.ingest.IngestDataSourceWorkerController; import org.sleuthkit.autopsy.ingest.IngestModuleDataSource; import org.sleuthkit.autopsy.ingest.IngestModuleInit; -import org.sleuthkit.autopsy.ingest.IngestServices; import org.sleuthkit.autopsy.ingest.PipelineContext; import org.sleuthkit.autopsy.recentactivity.ExtractUSB.USBInfo; import org.sleuthkit.datamodel.*; @@ -54,17 +52,18 @@ import org.xml.sax.InputSource; import org.xml.sax.SAXException; /** - * Extracting windows registry data using regripper + * Extract Windows registry data using RegRipper. + * Runs two versions of RegRipper. One is the generally available set of plug-ins + * and the second is a set that was customized for Autopsy to produce more structured + * XML output that we can parse and turn into blackboard artifacts. */ public class ExtractRegistry extends Extract { public Logger logger = Logger.getLogger(this.getClass().getName()); private String RR_PATH; private String RR_FULL_PATH; - boolean rrFound = false; - boolean rrFullFound = false; - private int sysid; - private IngestServices services; + boolean rrFound = false; // true if we found the Autopsy-specific version of regripper + boolean rrFullFound = false; // true if we found the full version of regripper final public static String MODULE_VERSION = "1.0"; private ExecUtil execRR; @@ -111,40 +110,58 @@ public class ExtractRegistry extends Extract { return MODULE_VERSION; } + /** - * Identifies registry files in the database by name, runs regripper on them, and parses the output. + * Search for the registry hives on the system. + * @param dataSource Data source to search for hives in. 
+ * @return List of registry hives + */ + private List findRegistryFiles(Content dataSource) { + List allRegistryFiles = new ArrayList<>(); + org.sleuthkit.autopsy.casemodule.services.FileManager fileManager = currentCase.getServices().getFileManager(); + + // find the user-specific ntuser.dat files + try { + allRegistryFiles.addAll(fileManager.findFiles(dataSource, "ntuser.dat")); + } + catch (TskCoreException ex) { + logger.log(Level.WARNING, "Error fetching 'ntuser.dat' file."); + } + + // find the system hives + String[] regFileNames = new String[] {"system", "software", "security", "sam"}; + for (String regFileName : regFileNames) { + try { + allRegistryFiles.addAll(fileManager.findFiles(dataSource, regFileName, "/system32/config")); + } + catch (TskCoreException ex) { + String msg = "Error fetching registry file: " + regFileName; + logger.log(Level.WARNING, msg); + this.addErrorMessage(this.getName() + ": " + msg); + } + } + return allRegistryFiles; + } + + /** + * Identifies registry files in the database by name, runs regripper on them, and parses the output. 
* * @param dataSource * @param controller */ - private void getRegistryFiles(Content dataSource, IngestDataSourceWorkerController controller) { - org.sleuthkit.autopsy.casemodule.services.FileManager fileManager = currentCase.getServices().getFileManager(); - List allRegistryFiles = new ArrayList<>(); - try { - allRegistryFiles.addAll(fileManager.findFiles(dataSource, "ntuser.dat")); - } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching 'ntuser.dat' file."); - } - - // try to find each of the listed registry files whose parent directory - // is like '/system32/config' - String[] regFileNames = new String[] {"system", "software", "security", "sam", "default"}; - for (String regFileName : regFileNames) { - try { - allRegistryFiles.addAll(fileManager.findFiles(dataSource, regFileName, "/system32/config")); - } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching registry file: " + regFileName); - } - } - ExtractUSB extrctr = new ExtractUSB(); + private void analyzeRegistryFiles(Content dataSource, IngestDataSourceWorkerController controller) { + List allRegistryFiles = findRegistryFiles(dataSource); + + // open the log file FileWriter logFile = null; try { logFile = new FileWriter(RAImageIngestModule.getRAOutputPath(currentCase, "reg") + File.separator + "regripper-info.txt"); } catch (IOException ex) { java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); - logFile = null; } + ExtractUSB extrctr = new ExtractUSB(); + int j = 0; for (AbstractFile regFile : allRegistryFiles) { String regFileName = regFile.getName(); @@ -155,59 +172,80 @@ public class ExtractRegistry extends Extract { ContentUtils.writeToFile(regFile, regFileNameLocalFile); } catch (IOException ex) { logger.log(Level.SEVERE, "Error writing the temp registry file. 
{0}", ex); + this.addErrorMessage(this.getName() + ": Error analyzing registry file " + regFileName); continue; } + + if (controller.isCancelled()) { + break; + } try { if (logFile != null) { logFile.write(Integer.toString(j-1) + "\t" + regFile.getUniquePath() + "\n"); } - } catch (TskCoreException ex) { - java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); - } - catch (IOException ex) { + } + catch (TskCoreException | IOException ex) { java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); } logger.log(Level.INFO, moduleName + "- Now getting registry information from " + regFileNameLocal); RegOutputFiles regOutputFiles = executeRegRip(regFileNameLocal, outputPathBase); - if (parseReg(regOutputFiles.autopsyPlugins, regFile.getId(), extrctr) == false) { - continue; + + if (controller.isCancelled()) { + break; + } + + // parse the autopsy-specific output + if (regOutputFiles.autopsyPlugins.isEmpty() == false) { + if (parseAutopsyPluginOutput(regOutputFiles.autopsyPlugins, regFile.getId(), extrctr) == false) { + this.addErrorMessage(this.getName() + ": Failed parsing registry file results " + regFileName); + } } - try { - BlackboardArtifact art = regFile.newArtifact(ARTIFACT_TYPE.TSK_TOOL_OUTPUT.getTypeID()); - BlackboardAttribute att = new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "RegRipper"); - art.addAttribute(att); - - FileReader fread = new FileReader(regOutputFiles.fullPlugins); - BufferedReader input = new BufferedReader(fread); - - StringBuilder sb = new StringBuilder(); - while (true) { - + // create a RAW_TOOL artifact for the full output + if (regOutputFiles.fullPlugins.isEmpty() == false) { + try { + BlackboardArtifact art = regFile.newArtifact(ARTIFACT_TYPE.TSK_TOOL_OUTPUT.getTypeID()); + BlackboardAttribute att = new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "RegRipper"); + 
art.addAttribute(att); + + FileReader fread = new FileReader(regOutputFiles.fullPlugins); + BufferedReader input = new BufferedReader(fread); + + StringBuilder sb = new StringBuilder(); try { - String s = input.readLine(); - if (s == null) { - break; + while (true) { + String s = input.readLine(); + if (s == null) { + break; + } + sb.append(s).append("\n"); } - sb.append(s).append("\n"); } catch (IOException ex) { java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); - break; + } finally { + try { + input.close(); + } catch (IOException ex) { + logger.log(Level.WARNING, "Failed to close reader.", ex); + } } + att = new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_TEXT.getTypeID(), "RecentActivity", sb.toString()); + art.addAttribute(att); + } catch (FileNotFoundException ex) { + this.addErrorMessage(this.getName() + ": Error reading registry file - " + regOutputFiles.fullPlugins); + java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); + } catch (TskCoreException ex) { + // TODO - add error message here? + java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); } - - att = new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_TEXT.getTypeID(), "RecentActivity", sb.toString()); - art.addAttribute(att); - } catch (FileNotFoundException ex) { - java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); - } catch (TskCoreException ex) { - java.util.logging.Logger.getLogger(ExtractRegistry.class.getName()).log(Level.SEVERE, null, ex); - } - + } + + // delete the hive regFileNameLocalFile.delete(); } + try { if (logFile != null) { logFile.close(); @@ -222,52 +260,56 @@ public class ExtractRegistry extends Extract { public String fullPlugins = ""; } - // TODO: Hardcoded command args/path needs to be removed. Maybe set some constants and set env variables for classpath - // I'm not happy with this code. 
Can't stand making a system call, is not an acceptable solution but is a hack for now. /** * Execute regripper on the given registry. * @param regFilePath Path to local copy of registry - * @param outFilePathBase Path to location to save output file to. Base name that will be extended on + * @param outFilePathBase Path to location to save output file to. Base name that will be extended on */ private RegOutputFiles executeRegRip(String regFilePath, String outFilePathBase) { - Writer writer = null; - - String type = ""; - String fullType = ""; + String autopsyType = ""; // Type argument for rr for autopsy-specific modules + String fullType = ""; // Type argument for rr for full set of modules + RegOutputFiles regOutputFiles = new RegOutputFiles(); - + if (regFilePath.toLowerCase().contains("system")) { - type = "autopsysystem"; + autopsyType = "autopsysystem"; fullType = "system"; - } else if (regFilePath.toLowerCase().contains("software")) { - type = "autopsysoftware"; + } + else if (regFilePath.toLowerCase().contains("software")) { + autopsyType = "autopsysoftware"; fullType = "software"; - } else if (regFilePath.toLowerCase().contains("ntuser")) { - type = "autopsy"; + } + else if (regFilePath.toLowerCase().contains("ntuser")) { + autopsyType = "autopsyntuser"; fullType = "ntuser"; - } else if (regFilePath.toLowerCase().contains("default")) { - //type = "1default"; - } else if (regFilePath.toLowerCase().contains("sam")) { + } + else if (regFilePath.toLowerCase().contains("sam")) { fullType = "sam"; - } else if (regFilePath.toLowerCase().contains("security")) { + } + else if (regFilePath.toLowerCase().contains("security")) { fullType = "security"; - } else { - // @@@ Seems like we should error out or something... 
- type = "1default"; + } + else { + return regOutputFiles; } - - if ((type.equals("") == false) && (rrFound)) { + + // run the autopsy-specific set of modules + if (!autopsyType.isEmpty() && rrFound) { + // TODO - add error messages + Writer writer = null; try { regOutputFiles.autopsyPlugins = outFilePathBase + "-autopsy.txt"; logger.log(Level.INFO, "Writing RegRipper results to: " + regOutputFiles.autopsyPlugins); writer = new FileWriter(regOutputFiles.autopsyPlugins); execRR = new ExecUtil(); execRR.execute(writer, RR_PATH, - "-r", regFilePath, "-f", type); + "-r", regFilePath, "-f", autopsyType); } catch (IOException ex) { logger.log(Level.SEVERE, "Unable to RegRipper and process parse some registry files.", ex); + this.addErrorMessage(this.getName() + ": Failed to analyze registry file"); } catch (InterruptedException ex) { logger.log(Level.SEVERE, "RegRipper has been interrupted, failed to parse registry.", ex); + this.addErrorMessage(this.getName() + ": Failed to analyze registry file"); } finally { if (writer != null) { try { @@ -278,11 +320,10 @@ public class ExtractRegistry extends Extract { } } } - else { - logger.log(Level.INFO, "Not running Autopsy-only modules on hive"); - } - if ((fullType.equals("") == false) && (rrFullFound)) { + // run the full set of rr modules + if (!fullType.isEmpty() && rrFullFound) { + Writer writer = null; try { regOutputFiles.fullPlugins = outFilePathBase + "-full.txt"; logger.log(Level.INFO, "Writing Full RegRipper results to: " + regOutputFiles.fullPlugins); @@ -292,8 +333,10 @@ public class ExtractRegistry extends Extract { "-r", regFilePath, "-f", fullType); } catch (IOException ex) { logger.log(Level.SEVERE, "Unable to run full RegRipper and process parse some registry files.", ex); + this.addErrorMessage(this.getName() + ": Failed to analyze registry file"); } catch (InterruptedException ex) { logger.log(Level.SEVERE, "RegRipper full has been interrupted, failed to parse registry.", ex); + 
this.addErrorMessage(this.getName() + ": Failed to analyze registry file"); } finally { if (writer != null) { try { @@ -304,28 +347,21 @@ public class ExtractRegistry extends Extract { } } } - else { - logger.log(Level.INFO, "Not running original RR modules on hive"); - } + return regOutputFiles; } // @@@ VERIFY that we are doing the right thing when we parse multiple NTUSER.DAT - - private boolean parseReg(String regRecord, long orgId, ExtractUSB extrctr) { + private boolean parseAutopsyPluginOutput(String regRecord, long orgId, ExtractUSB extrctr) { FileInputStream fstream = null; try { - Case currentCase = Case.getCurrentCase(); // get the most updated case SleuthkitCase tempDb = currentCase.getSleuthkitCase(); // Read the file in and create a Document and elements File regfile = new File(regRecord); fstream = new FileInputStream(regfile); - //InputStreamReader fstreamReader = new InputStreamReader(fstream, "UTF-8"); - //BufferedReader input = new BufferedReader(fstreamReader); - //logger.log(Level.INFO, "using encoding " + fstreamReader.getEncoding()); + String regString = new Scanner(fstream, "UTF-8").useDelimiter("\\Z").next(); - //regfile.delete(); String startdoc = ""; String result = regString.replaceAll("----------------------------------------", ""); result = result.replaceAll("\\n", ""); @@ -343,18 +379,19 @@ public class ExtractRegistry extends Extract { int len = children.getLength(); for (int i = 0; i < len; i++) { Element tempnode = (Element) children.item(i); - String context = tempnode.getNodeName(); + + String dataType = tempnode.getNodeName(); - NodeList timenodes = tempnode.getElementsByTagName("time"); - Long time = null; + NodeList timenodes = tempnode.getElementsByTagName("mtime"); + Long mtime = null; if (timenodes.getLength() > 0) { Element timenode = (Element) timenodes.item(0); String etime = timenode.getTextContent(); try { Long epochtime = new SimpleDateFormat("EEE MMM d HH:mm:ss yyyy").parse(etime).getTime(); - time = 
epochtime.longValue(); - String Tempdate = time.toString(); - time = Long.valueOf(Tempdate) / 1000; + mtime = epochtime.longValue(); + String Tempdate = mtime.toString(); + mtime = Long.valueOf(Tempdate) / 1000; } catch (ParseException ex) { logger.log(Level.WARNING, "Failed to parse epoch time when parsing the registry."); } @@ -369,77 +406,67 @@ public class ExtractRegistry extends Extract { Element artroot = (Element) artroots.item(0); NodeList myartlist = artroot.getChildNodes(); String winver = ""; - String installdate = ""; for (int j = 0; j < myartlist.getLength(); j++) { Node artchild = myartlist.item(j); // If it has attributes, then it is an Element (based off API) if (artchild.hasAttributes()) { Element artnode = (Element) artchild; - String name = artnode.getAttribute("name"); + String value = artnode.getTextContent().trim(); Collection bbattributes = new ArrayList(); - if ("recentdocs".equals(context)) { + if ("recentdocs".equals(dataType)) { // BlackboardArtifact bbart = tempDb.getContentById(orgId).newArtifact(ARTIFACT_TYPE.TSK_RECENT_OBJECT); - // bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", context, time)); - // bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", context, name)); - // bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "RecentActivity", context, value)); + // bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", dataType, mtime)); + // bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", dataType, mtimeItem)); + // bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "RecentActivity", dataType, value)); // bbart.addAttributes(bbattributes); - } else if ("usb".equals(context)) { - try { - Long utime = null; - utime = Long.parseLong(name); - String Tempdate = utime.toString(); - utime = 
Long.valueOf(Tempdate); + // @@@ BC: Why are we ignoring this... + } + else if ("usb".equals(dataType)) { + try { + Long usbMtime = Long.parseLong(artnode.getAttribute("mtime")); + usbMtime = Long.valueOf(usbMtime.toString()); BlackboardArtifact bbart = tempDb.getContentById(orgId).newArtifact(ARTIFACT_TYPE.TSK_DEVICE_ATTACHED); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", context, utime)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", utime)); - String dev = artnode.getAttribute("dev"); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_MODEL.getTypeID(), "RecentActivity", context, dev)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_ID.getTypeID(), "RecentActivity", context, value)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_MODEL.getTypeID(), "RecentActivity", dev)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_ID.getTypeID(), "RecentActivity", value)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", usbMtime)); + String dev = artnode.getAttribute("dev"); + String model = dev; if (dev.toLowerCase().contains("vid")) { USBInfo info = extrctr.get(dev); if(info.getVendor()!=null) bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_MAKE.getTypeID(), "RecentActivity", info.getVendor())); if(info.getProduct() != null) - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_MODEL.getTypeID(), "RecentActivity", info.getProduct())); + model = info.getProduct(); } + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_MODEL.getTypeID(), "RecentActivity", model)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DEVICE_ID.getTypeID(), 
"RecentActivity", value)); bbart.addAttributes(bbattributes); } catch (TskCoreException ex) { logger.log(Level.SEVERE, "Error adding device attached artifact to blackboard."); } - } else if ("uninstall".equals(context)) { - Long ftime = null; + } + else if ("uninstall".equals(dataType)) { + Long itemMtime = null; try { - Long epochtime = new SimpleDateFormat("EEE MMM d HH:mm:ss yyyy").parse(name).getTime(); - ftime = epochtime.longValue(); - ftime = ftime / 1000; + Long epochtime = new SimpleDateFormat("EEE MMM d HH:mm:ss yyyy").parse(artnode.getAttribute("mtime")).getTime(); + itemMtime = epochtime.longValue(); + itemMtime = itemMtime / 1000; } catch (ParseException e) { logger.log(Level.WARNING, "Failed to parse epoch time for installed program artifact."); } - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", context, time)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", context, value)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", context, ftime)); - try { - if (time != null) { - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", time)); - } bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", value)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", ftime)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", itemMtime)); BlackboardArtifact bbart = tempDb.getContentById(orgId).newArtifact(ARTIFACT_TYPE.TSK_INSTALLED_PROG); bbart.addAttributes(bbattributes); } catch (TskCoreException ex) { logger.log(Level.SEVERE, "Error adding installed program artifact to blackboard."); } - } else if ("WinVersion".equals(context)) { + 
} + else if ("WinVersion".equals(dataType)) { + String name = artnode.getAttribute("name"); if (name.contains("ProductName")) { winver = value; @@ -448,7 +475,6 @@ public class ExtractRegistry extends Extract { winver = winver + " " + value; } if (name.contains("InstallDate")) { - installdate = value; Long installtime = null; try { Long epochtime = new SimpleDateFormat("EEE MMM d HH:mm:ss yyyy").parse(value).getTime(); @@ -459,9 +485,6 @@ public class ExtractRegistry extends Extract { logger.log(Level.SEVERE, "RegRipper::Conversion on DateTime -> ", e); } try { - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", context, winver)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", context, installtime)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", winver)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", installtime)); BlackboardArtifact bbart = tempDb.getContentById(orgId).newArtifact(ARTIFACT_TYPE.TSK_INSTALLED_PROG); @@ -470,16 +493,15 @@ public class ExtractRegistry extends Extract { logger.log(Level.SEVERE, "Error adding installed program artifact to blackboard."); } } - } else if ("office".equals(context)) { + } + else if ("office".equals(dataType)) { + String name = artnode.getAttribute("name"); + try { BlackboardArtifact bbart = tempDb.getContentById(orgId).newArtifact(ARTIFACT_TYPE.TSK_RECENT_OBJECT); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", context, time)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", context, name)); - //bbattributes.add(new 
BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "RecentActivity", context, value)); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", context, artnode.getName())); - if (time != null) { - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", time)); + // @@@ BC: Consider removing this after some more testing. It looks like an Mtime associated with the root key and not the individual item + if (mtime != null) { + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", mtime)); } bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", name)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "RecentActivity", value)); @@ -488,13 +510,8 @@ public class ExtractRegistry extends Extract { } catch (TskCoreException ex) { logger.log(Level.SEVERE, "Error adding recent object artifact to blackboard."); } - - } else { - //BlackboardArtifact bbart = tempDb.getContentById(orgId).newArtifact(sysid); - //bbart.addAttributes(bbattributes); } } - } } return true; @@ -518,19 +535,16 @@ public class ExtractRegistry extends Extract { } @Override - public void process(PipelineContextpipelineContext, Content dataSource, IngestDataSourceWorkerController controller) { - this.getRegistryFiles(dataSource, controller); + analyzeRegistryFiles(dataSource, controller); } @Override public void init(IngestModuleInit initContext) { - services = IngestServices.getDefault(); } @Override public void complete() { - logger.info("Registry Extract has completed."); } @Override @@ -539,7 +553,6 @@ public class ExtractRegistry extends Extract { execRR.stop(); execRR = null; } - } @Override diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Firefox.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Firefox.java index 4322a2b926..4abfcf0bec 
100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Firefox.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/Firefox.java @@ -2,7 +2,7 @@ * * Autopsy Forensic Browser * - * Copyright 2012 Basis Technology Corp. + * Copyright 2012-2013 Basis Technology Corp. * * Copyright 2012 42six Solutions. * Contact: aebadirad 42six com @@ -32,7 +32,6 @@ import java.util.HashMap; import java.util.List; import java.util.logging.Level; import org.sleuthkit.autopsy.casemodule.services.FileManager; -import org.sleuthkit.autopsy.coreutils.EscapeUtil; import org.sleuthkit.autopsy.datamodel.ContentUtils; import org.sleuthkit.autopsy.ingest.PipelineContext; import org.sleuthkit.autopsy.ingest.IngestDataSourceWorkerController; @@ -53,11 +52,13 @@ import org.sleuthkit.datamodel.TskCoreException; */ public class Firefox extends Extract { - private static final String ffquery = "SELECT moz_historyvisits.id,url,title,visit_count,(visit_date/1000000) as visit_date,from_visit,(SELECT url FROM moz_places WHERE id=moz_historyvisits.from_visit) as ref FROM moz_places, moz_historyvisits WHERE moz_places.id = moz_historyvisits.place_id AND hidden = 0"; - private static final String ffcookiequery = "SELECT name,value,host,expiry,(lastAccessed/1000000) as lastAccessed,(creationTime/1000000) as creationTime FROM moz_cookies"; - private static final String ff3cookiequery = "SELECT name,value,host,expiry,(lastAccessed/1000000) as lastAccessed FROM moz_cookies"; - private static final String ffbookmarkquery = "SELECT fk, moz_bookmarks.title, url FROM moz_bookmarks INNER JOIN moz_places ON moz_bookmarks.fk=moz_places.id"; - private static final String ffdownloadquery = "select target, source,(startTime/1000000) as startTime, maxBytes from moz_downloads"; + private static final String historyQuery = "SELECT moz_historyvisits.id,url,title,visit_count,(visit_date/1000000) as visit_date,from_visit,(SELECT url FROM moz_places WHERE id=moz_historyvisits.from_visit) as ref FROM 
moz_places, moz_historyvisits WHERE moz_places.id = moz_historyvisits.place_id AND hidden = 0"; + private static final String cookieQuery = "SELECT name,value,host,expiry,(lastAccessed/1000000) as lastAccessed,(creationTime/1000000) as creationTime FROM moz_cookies"; + private static final String cookieQueryV3 = "SELECT name,value,host,expiry,(lastAccessed/1000000) as lastAccessed FROM moz_cookies"; + private static final String bookmarkQuery = "SELECT fk, moz_bookmarks.title, url, (moz_bookmarks.dateAdded/1000000) as dateAdded FROM moz_bookmarks INNER JOIN moz_places ON moz_bookmarks.fk=moz_places.id"; + private static final String downloadQuery = "SELECT target, source,(startTime/1000000) as startTime, maxBytes FROM moz_downloads"; + private static final String downloadQueryVersion24 = "SELECT url, content as target, (lastModified/1000000) as lastModified FROM moz_places, moz_annos WHERE moz_places.id = moz_annos.place_id AND moz_annos.anno_attribute_id = 3"; + public int FireFoxCount = 0; final public static String MODULE_VERSION = "1.0"; private IngestServices services; @@ -73,7 +74,8 @@ public class Firefox extends Extract { } @Override - public void process(PipelineContextpipelineContext, Content dataSource, IngestDataSourceWorkerController controller) { + public void process(PipelineContext pipelineContext, Content dataSource, IngestDataSourceWorkerController controller) { + dataFound = false; this.getHistory(dataSource, controller); this.getBookmark(dataSource, controller); this.getDownload(dataSource, controller); @@ -90,18 +92,28 @@ public class Firefox extends Extract { try { historyFiles = fileManager.findFiles(dataSource, "%places.sqlite%", "Firefox"); } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching internet history files for Firefox."); + String msg = "Error fetching internet history files for Firefox."; + logger.log(Level.WARNING, msg); + this.addErrorMessage(this.getName() + ": " + msg); + return; } - if (historyFiles == 
null) { + if (historyFiles.isEmpty()) { + String msg = "No FireFox history files found."; + logger.log(Level.INFO, msg); return; } + dataFound = true; + int j = 0; for (AbstractFile historyFile : historyFiles) { + if (historyFile.getSize() == 0) { + continue; + } + String fileName = historyFile.getName(); String temps = RAImageIngestModule.getRATempPath(currentCase, "firefox") + File.separator + fileName + j + ".db"; - int errors = 0; try { ContentUtils.writeToFile(historyFile, new File(temps)); } catch (IOException ex) { @@ -114,25 +126,20 @@ public class Firefox extends Extract { dbFile.delete(); break; } - List> tempList = this.dbConnect(temps, ffquery); + List> tempList = this.dbConnect(temps, historyQuery); logger.log(Level.INFO, moduleName + "- Now getting history from " + temps + " with " + tempList.size() + "artifacts identified."); for (HashMap result : tempList) { Collection bbattributes = new ArrayList(); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", ((result.get("url").toString() != null) ? result.get("url").toString() : ""))); //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", ((result.get("url").toString() != null) ? EscapeUtil.decodeURL(result.get("url").toString()) : ""))); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", "Last Visited", (Long.valueOf(result.get("visit_date").toString())))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", (Long.valueOf(result.get("visit_date").toString())))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_REFERRER.getTypeID(), "RecentActivity", ((result.get("ref").toString() != null) ? 
result.get("ref").toString() : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", ((result.get("title").toString() != null) ? result.get("title").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_TITLE.getTypeID(), "RecentActivity", ((result.get("title").toString() != null) ? result.get("title").toString() : ""))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "FireFox")); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", (Util.extractDomain((result.get("url").toString() != null) ? result.get("url").toString() : "")))); this.addArtifact(ARTIFACT_TYPE.TSK_WEB_HISTORY, historyFile, bbattributes); } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Firefox web history artifacts."); - } ++j; dbFile.delete(); } @@ -140,6 +147,11 @@ public class Firefox extends Extract { services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY)); } + /** + * Queries for bookmark files and adds artifacts + * @param dataSource + * @param controller + */ private void getBookmark(Content dataSource, IngestDataSourceWorkerController controller) { FileManager fileManager = currentCase.getServices().getFileManager(); @@ -147,18 +159,26 @@ public class Firefox extends Extract { try { bookmarkFiles = fileManager.findFiles(dataSource, "places.sqlite", "Firefox"); } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching bookmark files for Firefox."); - } - - if (bookmarkFiles == null) { + String msg = "Error fetching bookmark files for Firefox."; + logger.log(Level.WARNING, msg); + this.addErrorMessage(this.getName() + ": " + msg); return; } + if (bookmarkFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any firefox bookmark files."); + return; + } + + dataFound = true; + int j = 0; for 
(AbstractFile bookmarkFile : bookmarkFiles) { + if (bookmarkFile.getSize() == 0) { + continue; + } String fileName = bookmarkFile.getName(); String temps = RAImageIngestModule.getRATempPath(currentCase, "firefox") + File.separator + fileName + j + ".db"; - int errors = 0; try { ContentUtils.writeToFile(bookmarkFile, new File(temps)); } catch (IOException ex) { @@ -171,22 +191,21 @@ public class Firefox extends Extract { dbFile.delete(); break; } - List> tempList = this.dbConnect(temps, ffbookmarkquery); + List> tempList = this.dbConnect(temps, bookmarkQuery); logger.log(Level.INFO, moduleName + "- Now getting bookmarks from " + temps + " with " + tempList.size() + "artifacts identified."); for (HashMap result : tempList) { Collection bbattributes = new ArrayList(); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", ((result.get("url").toString() != null) ? result.get("url").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", ((result.get("url").toString() != null) ? EscapeUtil.decodeURL(result.get("url").toString()) : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", ((result.get("title").toString() != null) ? result.get("title").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_TITLE.getTypeID(), "RecentActivity", ((result.get("title").toString() != null) ? 
result.get("title").toString() : ""))); + if (Long.valueOf(result.get("dateAdded").toString()) > 0) { + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_CREATED.getTypeID(), "RecentActivity", (Long.valueOf(result.get("dateAdded").toString())))); + } bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "FireFox")); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", (Util.extractDomain((result.get("url").toString() != null) ? result.get("url").toString() : "")))); this.addArtifact(ARTIFACT_TYPE.TSK_WEB_BOOKMARK, bookmarkFile, bbattributes); } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Firefox web history artifacts."); - } ++j; dbFile.delete(); } @@ -194,27 +213,36 @@ public class Firefox extends Extract { services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_BOOKMARK)); } - //COOKIES section - // This gets the cookie info + /** + * Queries for cookies file and adds artifacts + * @param dataSource + * @param controller + */ private void getCookie(Content dataSource, IngestDataSourceWorkerController controller) { - FileManager fileManager = currentCase.getServices().getFileManager(); List cookiesFiles = null; try { cookiesFiles = fileManager.findFiles(dataSource, "cookies.sqlite", "Firefox"); } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching cookies files for Firefox."); - } - - if (cookiesFiles == null) { + String msg = "Error fetching cookies files for Firefox."; + logger.log(Level.WARNING, msg); + this.addErrorMessage(this.getName() + ": " + msg); return; } + if (cookiesFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any Firefox cookie files."); + return; + } + + dataFound = true; int j = 0; for (AbstractFile cookiesFile : cookiesFiles) { + if (cookiesFile.getSize() == 0) { + continue; + } String fileName = 
cookiesFile.getName(); String temps = RAImageIngestModule.getRATempPath(currentCase, "firefox") + File.separator + fileName + j + ".db"; - int errors = 0; try { ContentUtils.writeToFile(cookiesFile, new File(temps)); } catch (IOException ex) { @@ -230,9 +258,9 @@ public class Firefox extends Extract { boolean checkColumn = Util.checkColumn("creationTime", "moz_cookies", temps); String query = null; if (checkColumn) { - query = ffcookiequery; + query = cookieQuery; } else { - query = ff3cookiequery; + query = cookieQueryV3; } List> tempList = this.dbConnect(temps, query); @@ -241,28 +269,18 @@ public class Firefox extends Extract { Collection bbattributes = new ArrayList(); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", ((result.get("host").toString() != null) ? result.get("host").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", ((result.get("host").toString() != null) ? EscapeUtil.decodeURL(result.get("host").toString()) : ""))); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", "Title", ((result.get("name").toString() != null) ? result.get("name").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", "Last Visited", (Long.valueOf(result.get("lastAccessed").toString())))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", ((result.get("name").toString() != null) ? 
result.get("name").toString() : ""))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", (Long.valueOf(result.get("lastAccessed").toString())))); - if (checkColumn == true) { - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", "Created", (Long.valueOf(result.get("creationTime").toString())))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME.getTypeID(), "RecentActivity", (Long.valueOf(result.get("creationTime").toString())))); - } - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "FireFox")); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", ((result.get("host").toString() != null) ? result.get("host").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_NAME.getTypeID(), "RecentActivity", ((result.get("name").toString() != null) ? result.get("name").toString() : ""))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_VALUE.getTypeID(), "RecentActivity", ((result.get("value").toString() != null) ? 
result.get("value").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "FireFox")); + + if (checkColumn == true) { + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_CREATED.getTypeID(), "RecentActivity", (Long.valueOf(result.get("creationTime").toString())))); + } String domain = Util.extractDomain(result.get("host").toString()); domain = domain.replaceFirst("^\\.+(?!$)", ""); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", domain)); this.addArtifact(ARTIFACT_TYPE.TSK_WEB_COOKIE, cookiesFile, bbattributes); - - } - if (errors > 0) { - this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Firefox web history artifacts."); } ++j; dbFile.delete(); @@ -270,25 +288,49 @@ public class Firefox extends Extract { services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_COOKIE)); } - - //Downloads section - // This gets the downloads info + + /** + * Queries for downloads files and adds artifacts + * @param dataSource + * @param controller + */ private void getDownload(Content dataSource, IngestDataSourceWorkerController controller) { + getDownloadPreVersion24(dataSource, controller); + getDownloadVersion24(dataSource, controller); + } + /** + * Finds downloads artifacts from Firefox data from versions before 24.0. + * + * Downloads were stored in a separate downloads database. 
+ * + * @param dataSource + * @param controller + */ + private void getDownloadPreVersion24(Content dataSource, IngestDataSourceWorkerController controller) { + FileManager fileManager = currentCase.getServices().getFileManager(); List downloadsFiles = null; try { downloadsFiles = fileManager.findFiles(dataSource, "downloads.sqlite", "Firefox"); } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Error fetching 'downloads' files for Firefox."); - } - - if (downloadsFiles == null) { + String msg = "Error fetching 'downloads' files for Firefox."; + logger.log(Level.WARNING, msg); + this.addErrorMessage(this.getName() + ": " + msg); return; } - + + if (downloadsFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any pre-version-24.0 Firefox download files."); + return; + } + + dataFound = true; int j = 0; for (AbstractFile downloadsFile : downloadsFiles) { + if (downloadsFile.getSize() == 0) { + continue; + } String fileName = downloadsFile.getName(); String temps = RAImageIngestModule.getRATempPath(currentCase, "firefox") + File.separator + fileName + j + ".db"; int errors = 0; @@ -305,26 +347,29 @@ public class Firefox extends Extract { break; } - List> tempList = this.dbConnect(temps, ffdownloadquery); + List> tempList = this.dbConnect(temps, downloadQuery); logger.log(Level.INFO, moduleName + "- Now getting downloads from " + temps + " with " + tempList.size() + "artifacts identified."); for (HashMap result : tempList) { + + Collection bbattributes = new ArrayList(); + + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", ((result.get("source").toString() != null) ? result.get("source").toString() : ""))); + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", ((result.get("source").toString() != null) ? 
EscapeUtil.decodeURL(result.get("source").toString()) : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", (Long.valueOf(result.get("startTime").toString())))); + try { - Collection bbattributes = new ArrayList(); String urldecodedtarget = URLDecoder.decode(result.get("source").toString().replaceAll("file:///", ""), "UTF-8"); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", ((result.get("source").toString() != null) ? result.get("source").toString() : ""))); - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", ((result.get("source").toString() != null) ? EscapeUtil.decodeURL(result.get("source").toString()) : ""))); - //TODO Revisit usage of deprecated constructor as per TSK-583 - //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", "Last Visited", (Long.valueOf(result.get("startTime").toString())))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", (Long.valueOf(result.get("startTime").toString())))); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH_ID.getTypeID(), "RecentActivity", Util.findID(dataSource, urldecodedtarget))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), "RecentActivity", ((result.get("target").toString() != null) ? result.get("target").toString() : ""))); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "FireFox")); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", (Util.extractDomain((result.get("source").toString() != null) ? 
result.get("source").toString() : "")))); - this.addArtifact(ARTIFACT_TYPE.TSK_WEB_DOWNLOAD, downloadsFile, bbattributes); } catch (UnsupportedEncodingException ex) { logger.log(Level.SEVERE, "Error decoding Firefox download URL in " + temps, ex); errors++; } + + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), "RecentActivity", ((result.get("target").toString() != null) ? result.get("target").toString() : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "FireFox")); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", (Util.extractDomain((result.get("source").toString() != null) ? result.get("source").toString() : "")))); + this.addArtifact(ARTIFACT_TYPE.TSK_WEB_DOWNLOAD, downloadsFile, bbattributes); + } if (errors > 0) { this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Firefox web history artifacts."); @@ -336,6 +381,82 @@ public class Firefox extends Extract { services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_DOWNLOAD)); } + + /** + * Gets download artifacts from Firefox data from version 24. + * + * Downloads are stored in the places database. 
+ * + * @param dataSource + * @param controller + */ + private void getDownloadVersion24(Content dataSource, IngestDataSourceWorkerController controller) { + FileManager fileManager = currentCase.getServices().getFileManager(); + List downloadsFiles = null; + try { + downloadsFiles = fileManager.findFiles(dataSource, "places.sqlite", "Firefox"); + } catch (TskCoreException ex) { + String msg = "Error fetching 'downloads' files for Firefox."; + logger.log(Level.WARNING, msg); + this.addErrorMessage(this.getName() + ": " + msg); + return; + } + + if (downloadsFiles.isEmpty()) { + logger.log(Level.INFO, "Didn't find any version-24.0 Firefox download files."); + return; + } + + dataFound = true; + int j = 0; + for (AbstractFile downloadsFile : downloadsFiles) { + if (downloadsFile.getSize() == 0) { + continue; + } + String fileName = downloadsFile.getName(); + String temps = RAImageIngestModule.getRATempPath(currentCase, "firefox") + File.separator + fileName + "-downloads" + j + ".db"; + int errors = 0; + try { + ContentUtils.writeToFile(downloadsFile, new File(temps)); + } catch (IOException ex) { + logger.log(Level.SEVERE, "Error writing the sqlite db for firefox download artifacts.{0}", ex); + this.addErrorMessage(this.getName() + ": Error while trying to analyze file:" + fileName); + continue; + } + File dbFile = new File(temps); + if (controller.isCancelled()) { + dbFile.delete(); + break; + } + + List> tempList = this.dbConnect(temps, downloadQueryVersion24); + + logger.log(Level.INFO, moduleName + "- Now getting downloads from " + temps + " with " + tempList.size() + "artifacts identified."); + for (HashMap result : tempList) { + + Collection bbattributes = new ArrayList(); + + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL.getTypeID(), "RecentActivity", ((result.get("url").toString() != null) ? 
result.get("url").toString() : ""))); + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_URL_DECODED.getTypeID(), "RecentActivity", ((result.get("source").toString() != null) ? EscapeUtil.decodeURL(result.get("source").toString()) : ""))); + //TODO Revisit usage of deprecated constructor as per TSK-583 + //bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_LAST_ACCESSED.getTypeID(), "RecentActivity", "Last Visited", (Long.valueOf(result.get("startTime").toString())))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED.getTypeID(), "RecentActivity", Long.valueOf(result.get("lastModified").toString()))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), "RecentActivity", ((result.get("target").toString() != null) ? result.get("target").toString().replaceAll("file:///", "") : ""))); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PROG_NAME.getTypeID(), "RecentActivity", "FireFox")); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DOMAIN.getTypeID(), "RecentActivity", (Util.extractDomain((result.get("url").toString() != null) ? 
result.get("url").toString() : "")))); + this.addArtifact(ARTIFACT_TYPE.TSK_WEB_DOWNLOAD, downloadsFile, bbattributes); + + } + if (errors > 0) { + this.addErrorMessage(this.getName() + ": Error parsing " + errors + " Firefox web download artifacts."); + } + j++; + dbFile.delete(); + break; + } + + services.fireModuleDataEvent(new ModuleDataEvent("Recent Activity", BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_DOWNLOAD)); + } @Override public void init(IngestModuleInit initContext) { @@ -344,12 +465,10 @@ public class Firefox extends Extract { @Override public void complete() { - logger.info("Firefox Extract has completed."); } @Override public void stop() { - logger.info("Attmped to stop Firefox extract, but operation is not supported; skipping..."); } @Override diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/RAImageIngestModule.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/RAImageIngestModule.java index 53e19ae32b..13d6827cc6 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/RAImageIngestModule.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/RAImageIngestModule.java @@ -23,11 +23,12 @@ package org.sleuthkit.autopsy.recentactivity; import java.io.File; -import java.nio.file.Path; import java.util.ArrayList; +import java.util.List; import java.util.logging.Level; import org.sleuthkit.autopsy.casemodule.Case; import org.sleuthkit.autopsy.coreutils.Logger; +import org.sleuthkit.autopsy.coreutils.Version; import org.sleuthkit.autopsy.ingest.PipelineContext; import org.sleuthkit.autopsy.ingest.IngestDataSourceWorkerController; import org.sleuthkit.autopsy.ingest.IngestServices; @@ -49,7 +50,8 @@ public final class RAImageIngestModule extends IngestModuleDataSource { private static int messageId = 0; private StringBuilder subCompleted = new StringBuilder(); private ArrayList modules; - final public static String MODULE_VERSION = "1.0"; + private List browserModules; + final public static String 
MODULE_VERSION = Version.getVersion(); //public constructor is required //as multiple instances are created for processing multiple images simultaneously @@ -77,7 +79,7 @@ public final class RAImageIngestModule extends IngestModuleDataSource { } catch (Exception ex) { logger.log(Level.SEVERE, "Exception occurred in " + module.getName(), ex); subCompleted.append(module.getName()).append(" failed - see log for details<br>
"); - errors.add(module.getName() + "had errors -- see log"); + errors.add(module.getName() + " had errors -- see log"); } controller.progress(i + 1); errors.addAll(module.getErrorMessages()); @@ -106,6 +108,17 @@ public final class RAImageIngestModule extends IngestModuleDataSource { } final IngestMessage msg = IngestMessage.createMessage(++messageId, msgLevel, this, "Finished " + dataSource.getName()+ " - " + errorMsgSubject, errorMessage.toString()); services.postMessage(msg); + + StringBuilder historyMsg = new StringBuilder(); + historyMsg.append("<p>Browser Data on ").append(dataSource.getName()).append(":<ul>\n"); + for (Extract module : browserModules) { + historyMsg.append("<li>").append(module.getName()); + historyMsg.append(": ").append((module.foundData()) ? " Found." : " Not Found."); + historyMsg.append("</li>"); + } + historyMsg.append("</ul>
"); + final IngestMessage inboxMsg = IngestMessage.createMessage(++messageId, MessageType.INFO, this, dataSource.getName() + " - Browser Results", historyMsg.toString()); + services.postMessage(inboxMsg); } @Override @@ -139,6 +152,7 @@ public final class RAImageIngestModule extends IngestModuleDataSource { @Override public void init(IngestModuleInit initContext) { modules = new ArrayList<>(); + browserModules = new ArrayList(); logger.log(Level.INFO, "init() {0}", this.toString()); services = IngestServices.getDefault(); @@ -150,9 +164,16 @@ public final class RAImageIngestModule extends IngestModuleDataSource { modules.add(chrome); modules.add(firefox); - modules.add(registry); modules.add(iexplore); + // this needs to run after the web browser modules modules.add(SEUQA); + + // this runs last because it is slowest + modules.add(registry); + + browserModules.add(chrome); + browserModules.add(firefox); + browserModules.add(iexplore); for (Extract module : modules) { try { diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SEUQAMappings.xml b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SEUQAMappings.xml index 786192b39b..fea8cea8d8 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SEUQAMappings.xml +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SEUQAMappings.xml @@ -33,6 +33,19 @@ splitToken: + + + + + + + + + + + + + diff --git a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SearchEngineURLQueryAnalyzer.java b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SearchEngineURLQueryAnalyzer.java index 6b357e0d33..6e9a11a85c 100644 --- a/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SearchEngineURLQueryAnalyzer.java +++ b/RecentActivity/src/org/sleuthkit/autopsy/recentactivity/SearchEngineURLQueryAnalyzer.java @@ -27,7 +27,6 @@ import java.net.URLDecoder; import java.util.ArrayList; import java.util.Collection; import java.util.HashMap; -import java.util.List; import 
java.util.Map; import java.util.Set; import java.util.logging.Level; @@ -48,7 +47,6 @@ import org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE; import org.sleuthkit.datamodel.BlackboardAttribute; import org.sleuthkit.datamodel.BlackboardAttribute.ATTRIBUTE_TYPE; import org.sleuthkit.datamodel.Content; -import org.sleuthkit.datamodel.FsContent; import org.sleuthkit.datamodel.TskException; import org.w3c.dom.Document; import org.w3c.dom.NamedNodeMap; diff --git a/ScalpelCarver/nbproject/project.xml b/ScalpelCarver/nbproject/project.xml index d0f610f870..09a3bb2f87 100644 --- a/ScalpelCarver/nbproject/project.xml +++ b/ScalpelCarver/nbproject/project.xml @@ -1,22 +1,22 @@ - - - org.netbeans.modules.apisupport.project - - - org.sleuthkit.autopsy.scalpel - - - - org.sleuthkit.autopsy.core - - - - 9 - 7.0 - - - - - - - + + + org.netbeans.modules.apisupport.project + + + org.sleuthkit.autopsy.scalpel + + + + org.sleuthkit.autopsy.core + + + + 9 + 7.0 + + + + + + + diff --git a/ScalpelCarver/src/org/sleuthkit/autopsy/scalpel/ScalpelCarverIngestModule.java b/ScalpelCarver/src/org/sleuthkit/autopsy/scalpel/ScalpelCarverIngestModule.java index ba4b0935e0..8c9e7c1fde 100644 --- a/ScalpelCarver/src/org/sleuthkit/autopsy/scalpel/ScalpelCarverIngestModule.java +++ b/ScalpelCarver/src/org/sleuthkit/autopsy/scalpel/ScalpelCarverIngestModule.java @@ -26,6 +26,7 @@ import java.util.logging.Level; import org.sleuthkit.autopsy.casemodule.Case; import org.sleuthkit.autopsy.coreutils.Logger; import org.sleuthkit.autopsy.coreutils.PlatformUtil; +import org.sleuthkit.autopsy.coreutils.Version; import org.sleuthkit.autopsy.ingest.IngestModuleAbstractFile; import org.sleuthkit.autopsy.ingest.IngestModuleAbstractFile.ProcessResult; import org.sleuthkit.autopsy.ingest.IngestModuleInit; @@ -56,7 +57,7 @@ public class ScalpelCarverIngestModule { // extends IngestModuleAbstractFile { / private static ScalpelCarverIngestModule instance; private final String MODULE_NAME = "Scalpel Carver"; 
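Several modules in this changeset (RAImageIngestModule, ScalpelCarverIngestModule, SevenZipIngestModule) replace a hard-coded `MODULE_VERSION = "1.0"` with `Version.getVersion()`, so module versions track the application version from one place. A minimal sketch of that pattern follows; the `app.properties` resource name and `app.version` key are assumptions for illustration, not Autopsy's actual implementation:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

class Version {
    private static String version = null;

    // Lazily load the version string once from a bundled resource.
    // "app.properties" and "app.version" are hypothetical stand-ins.
    static synchronized String getVersion() {
        if (version == null) {
            Properties props = new Properties();
            try (InputStream in = Version.class.getResourceAsStream("app.properties")) {
                if (in != null) {
                    props.load(in);
                }
            } catch (IOException ignored) {
                // fall through to the default below
            }
            version = props.getProperty("app.version", "0.0.0");
        }
        return version;
    }
}
```

With this in place, each module declares `final public static String MODULE_VERSION = Version.getVersion();` instead of repeating a literal that drifts out of date.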
private final String MODULE_DESCRIPTION = "Carves files from unallocated space at ingest time.\nCarved files are reanalyzed and displayed in the directory tree."; - private final String MODULE_VERSION = "1.0"; + private final String MODULE_VERSION = Version.getVersion(); private final String MODULE_OUTPUT_DIR_NAME = "ScalpelCarver"; private String moduleOutputDirPath; private String configFileName = "scalpel.conf"; diff --git a/SevenZip/manifest.mf b/SevenZip/manifest.mf index 9989549bec..ca53e48be8 100644 --- a/SevenZip/manifest.mf +++ b/SevenZip/manifest.mf @@ -1,6 +1,6 @@ -Manifest-Version: 1.0 -OpenIDE-Module: org.sleuthkit.autopsy.sevenzip/1 -OpenIDE-Module-Implementation-Version: 3 -OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/sevenzip/Bundle.properties - - +Manifest-Version: 1.0 +OpenIDE-Module: org.sleuthkit.autopsy.sevenzip/1 +OpenIDE-Module-Implementation-Version: 3 +OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/sevenzip/Bundle.properties + + diff --git a/SevenZip/nbproject/project.xml b/SevenZip/nbproject/project.xml index 2a2f30e675..ef6d94d674 100644 --- a/SevenZip/nbproject/project.xml +++ b/SevenZip/nbproject/project.xml @@ -1,48 +1,48 @@ - - - org.netbeans.modules.apisupport.project - - - org.sleuthkit.autopsy.sevenzip - - - - org.netbeans.api.progress - - - - 1 - 1.32.1 - - - - org.sleuthkit.autopsy.core - - - - 9 - 7.0 - - - - org.sleuthkit.autopsy.corelibs - - - - 3 - 1.1 - - - - - - ext/sevenzipjbinding.jar - release/modules/ext/sevenzipjbinding.jar - - - ext/sevenzipjbinding-AllPlatforms.jar - release/modules/ext/sevenzipjbinding-AllPlatforms.jar - - - - + + + org.netbeans.modules.apisupport.project + + + org.sleuthkit.autopsy.sevenzip + + + + org.netbeans.api.progress + + + + 1 + 1.32.1 + + + + org.sleuthkit.autopsy.core + + + + 9 + 7.0 + + + + org.sleuthkit.autopsy.corelibs + + + + 3 + 1.1 + + + + + + ext/sevenzipjbinding.jar + release/modules/ext/sevenzipjbinding.jar + + + ext/sevenzipjbinding-AllPlatforms.jar + 
release/modules/ext/sevenzipjbinding-AllPlatforms.jar + + + + diff --git a/SevenZip/src/org/sleuthkit/autopsy/sevenzip/SevenZipIngestModule.java b/SevenZip/src/org/sleuthkit/autopsy/sevenzip/SevenZipIngestModule.java index 991662764d..322f7c669f 100644 --- a/SevenZip/src/org/sleuthkit/autopsy/sevenzip/SevenZipIngestModule.java +++ b/SevenZip/src/org/sleuthkit/autopsy/sevenzip/SevenZipIngestModule.java @@ -30,7 +30,6 @@ import java.util.Collections; import java.util.Date; import java.util.List; import java.util.logging.Level; -import javax.swing.JPanel; import net.sf.sevenzipjbinding.ISequentialOutStream; import net.sf.sevenzipjbinding.ISevenZipInArchive; import org.sleuthkit.autopsy.coreutils.Logger; @@ -47,12 +46,12 @@ import org.netbeans.api.progress.ProgressHandle; import org.netbeans.api.progress.ProgressHandleFactory; import org.sleuthkit.autopsy.casemodule.Case; import org.sleuthkit.autopsy.casemodule.services.FileManager; +import org.sleuthkit.autopsy.coreutils.Version; import org.sleuthkit.autopsy.ingest.PipelineContext; import org.sleuthkit.autopsy.ingest.IngestMessage; import org.sleuthkit.autopsy.ingest.IngestMonitor; import org.sleuthkit.autopsy.ingest.ModuleContentEvent; import org.sleuthkit.datamodel.BlackboardArtifact; -import org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE; import org.sleuthkit.datamodel.BlackboardAttribute; import org.sleuthkit.datamodel.BlackboardAttribute.ATTRIBUTE_TYPE; import org.sleuthkit.datamodel.DerivedFile; @@ -71,10 +70,9 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { private static final Logger logger = Logger.getLogger(SevenZipIngestModule.class.getName()); public static final String MODULE_NAME = "Archive Extractor"; public static final String MODULE_DESCRIPTION = "Extracts archive files (zip, rar, arj, 7z, gzip, bzip2, tar), reschedules them to current ingest and populates directory tree with new files."; - final public static String MODULE_VERSION = "1.0"; + final public static 
String MODULE_VERSION = Version.getVersion(); private IngestServices services; private volatile int messageID = 0; - private int processedFiles = 0; private boolean initialized = false; private static SevenZipIngestModule instance = null; //TODO use content type detection instead of extensions @@ -115,7 +113,6 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { @Override public void init(IngestModuleInit initContext) { - logger.log(Level.INFO, "init()"); services = IngestServices.getDefault(); initialized = false; @@ -136,7 +133,7 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { String details = "Error initializing output dir: " + unpackDirPath + ": " + e.getMessage(); //MessageNotifyUtil.Notify.error(msg, details); services.postMessage(IngestMessage.createErrorMessage(++messageID, instance, msg, details)); - return; + throw e; } } @@ -150,7 +147,7 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { String details = "Could not initialize 7-ZIP library: " + e.getMessage(); //MessageNotifyUtil.Notify.error(msg, details); services.postMessage(IngestMessage.createErrorMessage(++messageID, instance, msg, details)); - return; + throw new RuntimeException(e); } archiveDepthCountTree = new ArchiveDepthCountTree(); @@ -185,8 +182,8 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { try { if (abstractFile.hasChildren()) { //check if local unpacked dir exists - final String localRootPath = getLocalRootRelPath(abstractFile); - final String localRootAbsPath = getLocalRootAbsPath(localRootPath); + final String uniqueFileName = getUniqueName(abstractFile); + final String localRootAbsPath = getLocalRootAbsPath(uniqueFileName); if (new File(localRootAbsPath).exists()) { logger.log(Level.INFO, "File already has been processed as it has children and local unpacked file, skipping: " + abstractFile.getName()); return ProcessResult.OK; @@ -197,10 +194,7 @@ public final class 
SevenZipIngestModule extends IngestModuleAbstractFile { return ProcessResult.OK; } - logger.log(Level.INFO, "Processing with " + MODULE_NAME + ": " + abstractFile.getName()); - ++processedFiles; - List unpackedFiles = unpack(abstractFile); if (!unpackedFiles.isEmpty()) { @@ -208,8 +202,6 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { rescheduleNewFiles(pipelineContext, unpackedFiles); } - //process, return error if occurred - return ProcessResult.OK; } @@ -230,7 +222,7 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { * @param archiveFile * @return */ - private String getLocalRootRelPath(AbstractFile archiveFile) { + private String getUniqueName(AbstractFile archiveFile) { return archiveFile.getName() + "_" + archiveFile.getId(); } @@ -238,7 +230,7 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { * Get local abs path to the unpacked archive root * * @param localRootRelPath relative path to archive, from - * getLocalRootRelPath() + * getUniqueName() * @return */ private String getLocalRootAbsPath(String localRootRelPath) { @@ -297,10 +289,6 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { logger.log(Level.SEVERE, "Error getting archive item size and cannot detect if zipbomb. 
", ex); return false; } - - - - } /** @@ -350,8 +338,8 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { final ISimpleInArchive simpleInArchive = inArchive.getSimpleInterface(); //setup the archive local root folder - final String localRootPath = getLocalRootRelPath(archiveFile); - final String localRootAbsPath = getLocalRootAbsPath(localRootPath); + final String uniqueFileName = getUniqueName(archiveFile); + final String localRootAbsPath = getLocalRootAbsPath(uniqueFileName); final File localRoot = new File(localRootAbsPath); if (!localRoot.exists()) { try { @@ -364,7 +352,7 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { } //initialize tree hierarchy to keep track of unpacked file structure - UnpackedTree uTree = new UnpackedTree(unpackDir + "/" + localRootPath, archiveFile, fileManager); + UnpackedTree uTree = new UnpackedTree(unpackDir + "/" + uniqueFileName, archiveFile, fileManager); long freeDiskSpace = services.getFreeDiskSpace(); @@ -453,7 +441,7 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { } } - final String localFileRelPath = localRootPath + File.separator + extractedPath; + final String localFileRelPath = uniqueFileName + File.separator + extractedPath; //final String localRelPath = unpackDir + File.separator + localFileRelPath; final String localAbsPath = unpackDirPath + File.separator + localFileRelPath; @@ -565,9 +553,11 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { if (hasEncrypted) { String encryptionType = fullEncryption ? ENCRYPTION_FULL : ENCRYPTION_FILE_LEVEL; try { - BlackboardArtifact generalInfo = archiveFile.newArtifact(ARTIFACT_TYPE.TSK_GEN_INFO); + BlackboardArtifact generalInfo = archiveFile.getGenInfoArtifact(); generalInfo.addAttribute(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_ENCRYPTION_DETECTED.getTypeID(), MODULE_NAME, encryptionType)); + //@@@ We don't fire here because GEN_INFO isn't displayed in the tree.... 
Need to address how these should be displayed + //services.fireModuleDataEvent(new ModuleDataEvent(MODULE_NAME, BlackboardArtifact.ARTIFACT_TYPE.TSK_METADATA_EXIF)); } catch (TskCoreException ex) { logger.log(Level.SEVERE, "Error creating blackboard artifact for encryption detected for file: " + archiveFile, ex); } @@ -580,29 +570,20 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { services.postMessage(IngestMessage.createWarningMessage(++messageID, instance, msg, details)); } - return unpackedFiles; } @Override public void complete() { - logger.log(Level.INFO, "complete()"); if (initialized == false) { return; } - - //cleanup if any - archiveDepthCountTree = null; - + archiveDepthCountTree = null; } @Override public void stop() { - logger.log(Level.INFO, "stop()"); - - //cleanup if any archiveDepthCountTree = null; - } @Override @@ -626,13 +607,13 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { return false; } - - public boolean isSupported(AbstractFile file) { + private boolean isSupported(AbstractFile file) { String fileNameLower = file.getName().toLowerCase(); int dotI = fileNameLower.lastIndexOf("."); if (dotI == -1 || dotI == fileNameLower.length() - 1) { return false; //no extension } + final String extension = fileNameLower.substring(dotI + 1); for (int i = 0; i < SUPPORTED_EXTENSIONS.length; ++i) { if (extension.equals(SUPPORTED_EXTENSIONS[i])) { @@ -643,7 +624,6 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { //if no extension match, check for zip signature //(note, in near future, we will use pre-detected content type) return isZipFileHeader(file); - } /** @@ -672,7 +652,6 @@ public final class SevenZipIngestModule extends IngestModuleAbstractFile { int signature = bytes.getInt(); return signature == ZIP_SIGNATURE_BE; - } /** diff --git a/Testing/manifest.mf b/Testing/manifest.mf index 53c457afbb..381f4bb133 100644 --- a/Testing/manifest.mf +++ b/Testing/manifest.mf 
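The `isSupported()` change above makes the extension check private and keeps the fallback to a ZIP signature test on the file's first four bytes. A simplified, standalone version of that logic is sketched below; the `ZIP_SIGNATURE_BE` value and the extension-then-signature order come from the code in this diff, while the extension list here is shortened and assumed:

```java
import java.nio.ByteBuffer;

class ArchiveSniffer {
    // Big-endian "PK\003\004" local-file-header magic, as ZIP_SIGNATURE_BE above.
    private static final int ZIP_SIGNATURE_BE = 0x504B0304;
    private static final String[] SUPPORTED_EXTENSIONS = {"zip", "rar", "7z", "gz", "bz2", "tar"};

    // True if the first four bytes match the ZIP local file header signature.
    static boolean isZipHeader(byte[] firstBytes) {
        if (firstBytes.length < 4) {
            return false;
        }
        return ByteBuffer.wrap(firstBytes).getInt() == ZIP_SIGNATURE_BE;
    }

    static boolean isSupported(String fileName, byte[] firstBytes) {
        String lower = fileName.toLowerCase();
        int dot = lower.lastIndexOf('.');
        // Only consult the extension list if there is a real extension
        // (not a missing or trailing dot).
        if (dot != -1 && dot != lower.length() - 1) {
            String extension = lower.substring(dot + 1);
            for (String supported : SUPPORTED_EXTENSIONS) {
                if (extension.equals(supported)) {
                    return true;
                }
            }
        }
        // No extension match: fall back to the signature check.
        return isZipHeader(firstBytes);
    }
}
```

As the diff's comment notes, this extension-based detection is a stopgap until pre-detected content types are available.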
@@ -1,6 +1,6 @@ -Manifest-Version: 1.0 -AutoUpdate-Show-In-Client: false -OpenIDE-Module: org.sleuthkit.autopsy.testing/3 -OpenIDE-Module-Implementation-Version: 7 -OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/testing/Bundle.properties - +Manifest-Version: 1.0 +AutoUpdate-Show-In-Client: false +OpenIDE-Module: org.sleuthkit.autopsy.testing/3 +OpenIDE-Module-Implementation-Version: 7 +OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/testing/Bundle.properties + diff --git a/Testing/nbproject/project.xml b/Testing/nbproject/project.xml index e9621dd78b..f685034d4a 100644 --- a/Testing/nbproject/project.xml +++ b/Testing/nbproject/project.xml @@ -1,69 +1,69 @@ - - - org.netbeans.modules.apisupport.project - - - org.sleuthkit.autopsy.testing - - - - org.sleuthkit.autopsy.core - - - - 9 - 7.0 - - - - org.sleuthkit.autopsy.keywordsearch - - - - 5 - 3.2 - - - - - - qa-functional - - org.netbeans.libs.junit4 - - - - org.netbeans.modules.jellytools.java - - - - org.netbeans.modules.jellytools.platform - - - - org.netbeans.modules.jemmy - - - - org.netbeans.modules.nbjunit - - - - - - unit - - org.netbeans.libs.junit4 - - - - org.netbeans.modules.nbjunit - - - - - - - - - + + + org.netbeans.modules.apisupport.project + + + org.sleuthkit.autopsy.testing + + + + org.sleuthkit.autopsy.core + + + + 9 + 7.0 + + + + org.sleuthkit.autopsy.keywordsearch + + + + 5 + 3.2 + + + + + + qa-functional + + org.netbeans.libs.junit4 + + + + org.netbeans.modules.jellytools.java + + + + org.netbeans.modules.jellytools.platform + + + + org.netbeans.modules.jemmy + + + + org.netbeans.modules.nbjunit + + + + + + unit + + org.netbeans.libs.junit4 + + + + org.netbeans.modules.nbjunit + + + + + + + + + diff --git a/Testing/src/org/sleuthkit/autopsy/testing/Bundle.properties b/Testing/src/org/sleuthkit/autopsy/testing/Bundle.properties index 023a96d380..125ec1c485 100644 --- a/Testing/src/org/sleuthkit/autopsy/testing/Bundle.properties +++ 
b/Testing/src/org/sleuthkit/autopsy/testing/Bundle.properties @@ -1 +1 @@ -OpenIDE-Module-Name=Testing +OpenIDE-Module-Name=Testing diff --git a/Timeline/manifest.mf b/Timeline/manifest.mf index 3210242336..6cc867f901 100644 --- a/Timeline/manifest.mf +++ b/Timeline/manifest.mf @@ -1,7 +1,7 @@ -Manifest-Version: 1.0 -OpenIDE-Module: org.sleuthkit.autopsy.timeline/1 -OpenIDE-Module-Layer: org/sleuthkit/autopsy/timeline/layer.xml -OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/timeline/Bundle.properties -OpenIDE-Module-Requires: org.openide.windows.WindowManager -OpenIDE-Module-Implementation-Version: 3 - +Manifest-Version: 1.0 +OpenIDE-Module: org.sleuthkit.autopsy.timeline/1 +OpenIDE-Module-Layer: org/sleuthkit/autopsy/timeline/layer.xml +OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/timeline/Bundle.properties +OpenIDE-Module-Requires: org.openide.windows.WindowManager +OpenIDE-Module-Implementation-Version: 3 + diff --git a/Timeline/nbproject/project.xml b/Timeline/nbproject/project.xml index f21e7f63ba..352e0fd55f 100644 --- a/Timeline/nbproject/project.xml +++ b/Timeline/nbproject/project.xml @@ -1,113 +1,113 @@ - - - org.netbeans.modules.apisupport.project - - - org.sleuthkit.autopsy.timeline - - - - org.netbeans.api.progress - - - - 1 - 1.32.1 - - - - org.netbeans.modules.settings - - - - 1 - 1.35.1 - - - - org.openide.actions - - - - 6.26.1 - - - - org.openide.awt - - - - 7.46.1 - - - - org.openide.dialogs - - - - 7.25.1 - - - - org.openide.modules - - - - 7.32.1 - - - - org.openide.nodes - - - - 7.28.1 - - - - org.openide.util - - - - 8.25.2 - - - - org.openide.util.lookup - - - - 8.15.2 - - - - org.openide.windows - - - - 6.55.2 - - - - org.sleuthkit.autopsy.core - - - - 9 - 7.0 - - - - org.sleuthkit.autopsy.corelibs - - - - 3 - 1.1 - - - - - - - + + + org.netbeans.modules.apisupport.project + + + org.sleuthkit.autopsy.timeline + + + + org.netbeans.api.progress + + + + 1 + 1.32.1 + + + + org.netbeans.modules.settings + + + + 1 + 1.35.1 + 
+ + + org.openide.actions + + + + 6.26.1 + + + + org.openide.awt + + + + 7.46.1 + + + + org.openide.dialogs + + + + 7.25.1 + + + + org.openide.modules + + + + 7.32.1 + + + + org.openide.nodes + + + + 7.28.1 + + + + org.openide.util + + + + 8.25.2 + + + + org.openide.util.lookup + + + + 8.15.2 + + + + org.openide.windows + + + + 6.55.2 + + + + org.sleuthkit.autopsy.core + + + + 9 + 7.0 + + + + org.sleuthkit.autopsy.corelibs + + + + 3 + 1.1 + + + + + + + diff --git a/Timeline/src/org/sleuthkit/autopsy/timeline/Timeline.java b/Timeline/src/org/sleuthkit/autopsy/timeline/Timeline.java index 9f027c53ce..190aa2a7d5 100644 --- a/Timeline/src/org/sleuthkit/autopsy/timeline/Timeline.java +++ b/Timeline/src/org/sleuthkit/autopsy/timeline/Timeline.java @@ -1,1177 +1,1177 @@ -/* - * Autopsy Forensic Browser - * - * Copyright 2013 Basis Technology Corp. - * Contact: carrier sleuthkit org - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ -package org.sleuthkit.autopsy.timeline; - -import java.awt.Component; -import java.awt.Cursor; -import java.awt.Dimension; -import java.awt.EventQueue; -import java.beans.PropertyChangeEvent; -import java.beans.PropertyChangeListener; -import java.io.BufferedWriter; -import java.io.FileInputStream; -import java.io.FileNotFoundException; -import java.io.FileWriter; -import java.io.IOException; -import java.io.Writer; -import java.text.DateFormat; -import java.text.DateFormatSymbols; -import java.text.ParseException; -import java.text.SimpleDateFormat; -import java.util.ArrayList; -import java.util.Calendar; -import java.util.Date; -import java.util.List; -import java.util.Locale; -import java.util.Scanner; -import java.util.Stack; -import java.util.logging.Level; -import javafx.application.Platform; -import javafx.beans.value.ChangeListener; -import javafx.beans.value.ObservableValue; -import javafx.collections.FXCollections; -import javafx.collections.ObservableList; -import javafx.embed.swing.JFXPanel; -import javafx.event.ActionEvent; -import javafx.event.EventHandler; -import javafx.geometry.Pos; -import javafx.scene.Group; -import javafx.scene.Scene; -import javafx.scene.chart.BarChart; -import javafx.scene.chart.CategoryAxis; -import javafx.scene.chart.NumberAxis; -import javafx.scene.control.Button; -import javafx.scene.control.ComboBox; -import javafx.scene.control.Label; -import javafx.scene.control.ScrollPane; -import javafx.scene.input.MouseButton; -import javafx.scene.input.MouseEvent; -import javafx.scene.layout.HBox; -import javafx.scene.layout.VBox; -import javafx.scene.paint.Color; -import javax.swing.JFrame; -import javax.swing.JOptionPane; -import javax.swing.SwingUtilities; -import org.netbeans.api.progress.ProgressHandle; -import org.netbeans.api.progress.ProgressHandleFactory; -import org.openide.awt.ActionID; -import org.openide.awt.ActionReference; -import org.openide.awt.ActionReferences; -import org.openide.awt.ActionRegistration; 
-import org.openide.modules.InstalledFileLocator; -import org.openide.modules.ModuleInstall; -import org.openide.nodes.Children; -import org.openide.nodes.Node; -import org.openide.util.HelpCtx; -import org.openide.util.NbBundle; -import org.openide.util.actions.CallableSystemAction; -import org.openide.util.actions.Presenter; -import org.openide.util.lookup.Lookups; -import org.openide.windows.WindowManager; -import org.sleuthkit.autopsy.casemodule.Case; -import org.sleuthkit.autopsy.core.Installer; -import org.sleuthkit.autopsy.corecomponents.DataContentPanel; -import org.sleuthkit.autopsy.corecomponents.DataResultPanel; -import org.sleuthkit.autopsy.coreutils.Logger; -import org.sleuthkit.autopsy.coreutils.PlatformUtil; -import org.sleuthkit.autopsy.datamodel.FilterNodeLeaf; -import org.sleuthkit.autopsy.datamodel.DirectoryNode; -import org.sleuthkit.autopsy.datamodel.DisplayableItemNode; -import org.sleuthkit.autopsy.datamodel.DisplayableItemNodeVisitor; -import org.sleuthkit.autopsy.datamodel.FileNode; -import org.sleuthkit.autopsy.ingest.IngestManager; -import org.sleuthkit.autopsy.coreutils.ExecUtil; -import org.sleuthkit.datamodel.AbstractFile; -import org.sleuthkit.datamodel.SleuthkitCase; -import org.sleuthkit.datamodel.TskCoreException; - -@ActionID(category = "Tools", id = "org.sleuthkit.autopsy.timeline.Timeline") -@ActionRegistration(displayName = "#CTL_MakeTimeline", lazy = false) -@ActionReferences(value = { - @ActionReference(path = "Menu/Tools", position = 100)}) -@NbBundle.Messages(value = "CTL_TimelineView=Generate Timeline") -/** - * The Timeline Action entry point. 
Collects data and pushes data to javafx - * widgets - * - */ -public class Timeline extends CallableSystemAction implements Presenter.Toolbar, PropertyChangeListener { - - private static final Logger logger = Logger.getLogger(Timeline.class.getName()); - private final java.io.File macRoot = InstalledFileLocator.getDefault().locate("mactime", Timeline.class.getPackage().getName(), false); - private TimelineFrame mainFrame; //frame for holding all the elements - private Group fxGroupCharts; //Orders the charts - private Scene fxSceneCharts; //Displays the charts - private HBox fxHBoxCharts; //Holds the navigation buttons in horiztonal fashion. - private VBox fxVBox; //Holds the JavaFX Elements in vertical fashion. - private JFXPanel fxPanelCharts; //FX panel to hold the group - private BarChart fxChartEvents; //Yearly/Monthly events - Bar chart - private ScrollPane fxScrollEvents; //Scroll Panes for dealing with oversized an oversized chart - private static final int FRAME_HEIGHT = 700; //Sizing constants - private static final int FRAME_WIDTH = 1200; - private Button fxZoomOutButton; //Navigation buttons - private ComboBox fxDropdownSelectYears; //Dropdown box for selecting years. Useful when the charts' scale means some years are unclickable, despite having events. - private final Stack> fxStackPrevCharts = new Stack>(); //Stack for storing drill-up information. - private BarChart fxChartTopLevel; //the topmost chart, used for resetting to default view. 
- private DataResultPanel dataResultPanel; - private DataContentPanel dataContentPanel; - private ProgressHandle progress; - private java.io.File moduleDir; - private String mactimeFileName; - private List data; - private boolean listeningToAddImage = false; - private long lastObjectId = -1; - private TimelineProgressDialog progressDialog; - private EventHandler fxMouseEnteredListener; - private EventHandler fxMouseExitedListener; - private SleuthkitCase skCase; - private boolean fxInited = false; - - public Timeline() { - super(); - - fxInited = Installer.isJavaFxInited(); - - } - - //Swing components and JavafX components don't play super well together - //Swing components need to be initialized first, in the swing specific thread - //Next, the javafx components may be initialized. - private void customize() { - - //listeners - fxMouseEnteredListener = new EventHandler() { - @Override - public void handle(MouseEvent e) { - fxPanelCharts.setCursor(Cursor.getPredefinedCursor(Cursor.HAND_CURSOR)); - } - }; - fxMouseExitedListener = new EventHandler() { - @Override - public void handle(MouseEvent e) { - fxPanelCharts.setCursor(null); - } - }; - - SwingUtilities.invokeLater(new Runnable() { - @Override - public void run() { - //Making the main frame * - - mainFrame = new TimelineFrame(); - mainFrame.setFrameName(Case.getCurrentCase().getName() + " - Autopsy Timeline (Beta)"); - - //use the same icon on jframe as main application - mainFrame.setIconImage(WindowManager.getDefault().getMainWindow().getIconImage()); - mainFrame.setFrameSize(new Dimension(FRAME_WIDTH, FRAME_HEIGHT)); //(Width, Height) - - - dataContentPanel = DataContentPanel.createInstance(); - //dataContentPanel.setAlignmentX(Component.RIGHT_ALIGNMENT); - //dataContentPanel.setPreferredSize(new Dimension(FRAME_WIDTH, (int) (FRAME_HEIGHT * 0.4))); - - dataResultPanel = DataResultPanel.createInstance("Timeline Results", "", Node.EMPTY, 0, dataContentPanel); - 
dataResultPanel.setContentViewer(dataContentPanel); - //dataResultPanel.setAlignmentX(Component.LEFT_ALIGNMENT); - //dataResultPanel.setPreferredSize(new Dimension((int)(FRAME_WIDTH * 0.5), (int) (FRAME_HEIGHT * 0.5))); - logger.log(Level.INFO, "Successfully created viewers"); - - mainFrame.setBottomLeftPanel(dataResultPanel); - mainFrame.setBottomRightPanel(dataContentPanel); - - runJavaFxThread(); - } - }); - - - } - - private void runJavaFxThread() { - //JavaFX thread - //JavaFX components MUST be run in the JavaFX thread, otherwise massive amounts of exceptions will be thrown and caught. Liable to freeze up and crash. - //Components can be declared whenever, but initialization and manipulation must take place here. - Platform.runLater(new Runnable() { - @Override - public void run() { - try { - // start the progress bar - progress = ProgressHandleFactory.createHandle("Creating timeline . . ."); - progress.start(); - - fxChartEvents = null; //important to reset old data - fxPanelCharts = new JFXPanel(); - fxGroupCharts = new Group(); - fxSceneCharts = new Scene(fxGroupCharts, FRAME_WIDTH, FRAME_HEIGHT * 0.6); //Width, Height - fxVBox = new VBox(5); - fxVBox.setAlignment(Pos.BOTTOM_CENTER); - fxHBoxCharts = new HBox(10); - fxHBoxCharts.setAlignment(Pos.BOTTOM_CENTER); - - //Initializing default values for the scroll pane - fxScrollEvents = new ScrollPane(); - fxScrollEvents.setPrefSize(FRAME_WIDTH, FRAME_HEIGHT * 0.6); //Width, Height - fxScrollEvents.setContent(null); //Needs some content, otherwise it crashes - - // set up moduleDir - moduleDir = new java.io.File(Case.getCurrentCase().getModulesOutputDirAbsPath() + java.io.File.separator + "timeline"); - if (!moduleDir.exists()) { - moduleDir.mkdir(); - } - - int currentProgress = 0; - java.io.File mactimeFile = new java.io.File(moduleDir, mactimeFileName); - if (!mactimeFile.exists()) { - progressDialog.setProgressTotal(3); //total 3 units - logger.log(Level.INFO, "Creating body file"); - 
progressDialog.updateProgressBar("Generating Bodyfile"); - String bodyFilePath = makeBodyFile(); - progressDialog.updateProgressBar(++currentProgress); - logger.log(Level.INFO, "Creating mactime file: " + mactimeFile.getAbsolutePath()); - progressDialog.updateProgressBar("Generating Mactime"); - makeMacTime(bodyFilePath); - progressDialog.updateProgressBar(++currentProgress); - data = null; - } else { - progressDialog.setProgressTotal(1); //total 1 units - logger.log(Level.INFO, "Mactime file already exists; parsing that: " + mactimeFile.getAbsolutePath()); - } - - - progressDialog.updateProgressBar("Parsing Mactime"); - if (data == null) { - logger.log(Level.INFO, "Parsing mactime file: " + mactimeFile.getAbsolutePath()); - data = parseMacTime(mactimeFile); //The sum total of the mactime parsing. YearEpochs contain everything you need to make a timeline. - } - progressDialog.updateProgressBar(++currentProgress); - - //Making a dropdown box to select years. - List lsi = new ArrayList(); //List is in the format of {Year : Number of Events}, used for selecting from the dropdown. 
- for (YearEpoch ye : data) { - lsi.add(ye.year + " : " + ye.getNumFiles()); - } - ObservableList listSelect = FXCollections.observableArrayList(lsi); - fxDropdownSelectYears = new ComboBox(listSelect); - - //Buttons for navigating up and down the timeline - fxZoomOutButton = new Button("Zoom Out"); - fxZoomOutButton.setOnAction(new EventHandler() { - @Override - public void handle(ActionEvent e) { - BarChart bc; - if (fxStackPrevCharts.size() == 0) { - bc = fxChartTopLevel; - } else { - bc = fxStackPrevCharts.pop(); - } - fxChartEvents = bc; - fxScrollEvents.setContent(fxChartEvents); - } - }); - - fxDropdownSelectYears.getSelectionModel().selectedItemProperty().addListener(new ChangeListener() { - @Override - public void changed(ObservableValue ov, String t, String t1) { - if (fxDropdownSelectYears.getValue() != null) { - mainFrame.setTopComponentCursor(Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR)); - try { - fxChartEvents = createMonthsWithDrill(findYear(data, Integer.valueOf(fxDropdownSelectYears.getValue().split(" ")[0]))); - fxScrollEvents.setContent(fxChartEvents); - } finally { - mainFrame.setTopComponentCursor(null); - } - } - } - }); - - //Adding things to the V and H boxes. - //hBox_Charts stores the pseudo menu bar at the top of the timeline. |Zoom Out|View Year: [Select Year]|â–º| - fxHBoxCharts.getChildren().addAll(fxZoomOutButton, new Label("Go To:"), fxDropdownSelectYears); - fxVBox.getChildren().addAll(fxHBoxCharts, fxScrollEvents); //FxBox_V holds things in a visual stack. - fxGroupCharts.getChildren().add(fxVBox); //Adding the FxBox to the group. Groups make things easier to manipulate without having to update a hundred things every change. 
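The Zoom Out handler above walks back through previously viewed charts by popping `fxStackPrevCharts`, falling back to the top-level year chart when the stack is empty. The same control flow, sketched with plain strings standing in for BarChart instances (all names here are hypothetical):

```java
import java.util.Stack;

class DrillUpStack {
    static final String TOP_LEVEL = "year-chart";
    static final Stack<String> prevCharts = new Stack<>();

    // Mirrors fxZoomOutButton's handler: pop the previously viewed chart,
    // or reset to the top-level chart when nothing is stacked.
    static String zoomOut() {
        return prevCharts.isEmpty() ? TOP_LEVEL : prevCharts.pop();
    }
}
```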
- fxPanelCharts.setScene(fxSceneCharts); - - - fxPanelCharts.setAlignmentX(Component.LEFT_ALIGNMENT); - - fxChartTopLevel = createYearChartWithDrill(data); - fxChartEvents = fxChartTopLevel; - fxScrollEvents.setContent(fxChartEvents); - - EventQueue.invokeLater(new Runnable() { - @Override - public void run() { - mainFrame.setTopPanel(fxPanelCharts); - dataResultPanel.open(); - //mainFrame.pack(); - mainFrame.setVisible(true); - } - }); - } finally { - // stop the progress bar - progress.finish(); - - // close the progressDialog - progressDialog.doClose(0); - } - } - }); - } - - /** - * Creates a BarChart with datapoints for all the years from the parsed - * mactime file. - * - * @param allYears The list of years that have barData from the mactime file - * @return BarChart scaled to the year level - */ - private BarChart createYearChartWithDrill(final List allYears) { - final CategoryAxis xAxis = new CategoryAxis(); //Axes are very specific types. Categorys are strings. - final NumberAxis yAxis = new NumberAxis(); - final Label l = new Label(""); - l.setStyle("-fx-font: 24 arial;"); - l.setTextFill(Color.AZURE); - xAxis.setLabel("Years"); - yAxis.setLabel("Number of Events"); - //Charts are made up of individual pieces of Chart.Data. In this case, a piece of barData is a single bar on the graph. - //Data is packaged into a series, which can be assigned custom colors or styling - //After the series are created, 1 or more series are packaged into a single chart. - ObservableList> bcData = FXCollections.observableArrayList(); - BarChart.Series se = new BarChart.Series(); - if (allYears != null) { - for (final YearEpoch ye : allYears) { - se.getData().add(new BarChart.Data(String.valueOf(ye.year), ye.getNumFiles())); - } - } - bcData.add(se); - - - //Note: - // BarChart.Data wraps the Java Nodes class. BUT, until a BarChart.Data gets added to an actual series, it's node is null, and you can perform no operations on it. 
- // When the Data is added to a series(or a chart? I am unclear on where), a node is automaticaly generated for it, after which you can perform any of the operations it offers. - // In addtion, you are free to set the node to whatever you want. It wraps the most generic Node class. - // But it is for this reason that the chart generating functions have two forloops. I do not believe they can be condensed into a single loop due to the nodes being null until - // an undetermined point in time. - BarChart bc = new BarChart(xAxis, yAxis, bcData); - for (final BarChart.Data barData : bc.getData().get(0).getData()) { //.get(0) refers to the BarChart.Series class to work on. There is only one series in this graph, so get(0) is safe. - barData.getNode().setScaleX(.5); - - final javafx.scene.Node barNode = barData.getNode(); - //hover listener - barNode.addEventHandler(MouseEvent.MOUSE_ENTERED_TARGET, fxMouseEnteredListener); - barNode.addEventHandler(MouseEvent.MOUSE_EXITED_TARGET, fxMouseExitedListener); - - //click listener - barNode.addEventHandler(MouseEvent.MOUSE_CLICKED, - new EventHandler() { - @Override - public void handle(MouseEvent e) { - if (e.getButton().equals(MouseButton.PRIMARY)) { - if (e.getClickCount() == 1) { - Platform.runLater(new Runnable() { - @Override - public void run() { - BarChart b = - createMonthsWithDrill(findYear(allYears, Integer.valueOf(barData.getXValue()))); - fxChartEvents = b; - fxScrollEvents.setContent(fxChartEvents); - } - }); - - } - } - } - }); - } - - bc.autosize(); //Get an auto height - bc.setPrefWidth(FRAME_WIDTH); //but override the width - bc.setLegendVisible(false); //The legend adds too much extra chart space, it's not necessary. - return bc; - } - - /* - * Displays a chart with events from one year only, separated into 1-month chunks. - * Always 12 per year, empty months are represented by no bar. 
- */ - private BarChart createMonthsWithDrill(final YearEpoch ye) { - - final CategoryAxis xAxis = new CategoryAxis(); - final NumberAxis yAxis = new NumberAxis(); - xAxis.setLabel("Month (" + ye.year + ")"); - yAxis.setLabel("Number of Events"); - ObservableList> bcData = FXCollections.observableArrayList(); - - BarChart.Series se = new BarChart.Series(); - for (int monthNum = 0; monthNum < 12; ++monthNum) { - String monthName = new DateFormatSymbols().getMonths()[monthNum]; - MonthEpoch month = ye.getMonth(monthNum); - int numEvents = month == null ? 0 : month.getNumFiles(); - se.getData().add(new BarChart.Data(monthName, numEvents)); //Adding new barData at {X-pos, Y-Pos} - } - bcData.add(se); - final BarChart bc = new BarChart(xAxis, yAxis, bcData); - - for (int i = 0; i < 12; i++) { - for (final BarChart.Data barData : bc.getData().get(0).getData()) { - //Note: - // All the charts of this package have a problem where when the chart gets below a certain pixel ratio, the barData stops drawing. The axes and the labels remain, - // But the actual chart barData is invisible, unclickable, and unrendered. To partially compensate for that, barData.getNode() can be manually scaled up to increase visibility. - // Sometimes I've had it jacked up to as much as x2400 just to see a sliver of information. - // But that doesn't work all the time. Adding it to a scrollpane and letting the user scroll up and down to view the chart is the other workaround. Both of these fixes suck. 
- final javafx.scene.Node barNode = barData.getNode(); - barNode.setScaleX(.5); - - //hover listener - barNode.addEventHandler(MouseEvent.MOUSE_ENTERED_TARGET, fxMouseEnteredListener); - barNode.addEventHandler(MouseEvent.MOUSE_EXITED_TARGET, fxMouseExitedListener); - - //clicks - barNode.addEventHandler(MouseEvent.MOUSE_PRESSED, - new EventHandler() { - @Override - public void handle(MouseEvent e) { - if (e.getButton().equals(MouseButton.PRIMARY)) { - if (e.getClickCount() == 1) { - Platform.runLater(new Runnable() { - @Override - public void run() { - fxChartEvents = createEventsByMonth(findMonth(ye.months, monthStringToInt(barData.getXValue())), ye); - fxScrollEvents.setContent(fxChartEvents); - } - }); - } - } - } - }); - } - } - - bc.autosize(); - bc.setPrefWidth(FRAME_WIDTH); - bc.setLegendVisible(false); - fxStackPrevCharts.push(bc); - return bc; - } - - - /* - * Displays a chart with events from one month only. - * Up to 31 days per month, as low as 28 as determined by the specific MonthEpoch - */ - private BarChart createEventsByMonth(final MonthEpoch me, final YearEpoch ye) { - final CategoryAxis xAxis = new CategoryAxis(); - final NumberAxis yAxis = new NumberAxis(); - xAxis.setLabel("Day of Month"); - yAxis.setLabel("Number of Events"); - ObservableList> bcData = makeObservableListByMonthAllDays(me, ye.getYear()); - BarChart.Series series = new BarChart.Series(bcData); - series.setName(me.getMonthName() + " " + ye.getYear()); - - - ObservableList> ol = - FXCollections.>observableArrayList(series); - - final BarChart bc = new BarChart(xAxis, yAxis, ol); - for (final BarChart.Data barData : bc.getData().get(0).getData()) { - //data.getNode().setScaleX(2); - - final javafx.scene.Node barNode = barData.getNode(); - - //hover listener - barNode.addEventHandler(MouseEvent.MOUSE_ENTERED_TARGET, fxMouseEnteredListener); - barNode.addEventHandler(MouseEvent.MOUSE_EXITED_TARGET, fxMouseExitedListener); - - barNode.addEventHandler(MouseEvent.MOUSE_PRESSED, - new 
EventHandler() { - MonthEpoch myme = me; - - @Override - public void handle(MouseEvent e) { - SwingUtilities.invokeLater(new Runnable() { - @Override - public void run() { - //reset the view and free the current nodes before loading new ones - final FileRootNode d = new FileRootNode("Empty Root", new ArrayList()); - dataResultPanel.setNode(d); - dataResultPanel.setPath("Loading..."); - } - }); - final int day = (Integer.valueOf((barData.getXValue()).split("-")[1])); - final DayEpoch de = myme.getDay(day); - final List afs; - if (de != null) { - afs = de.getEvents(); - } else { - logger.log(Level.SEVERE, "There were no events for the clicked-on day: " + day); - return; - } - - SwingUtilities.invokeLater(new Runnable() { - @Override - public void run() { - final FileRootNode d = new FileRootNode("Root", afs); - dataResultPanel.setNode(d); - //set result viewer title path with the current date - String dateString = ye.getYear() + "-" + (1 + me.getMonthInt()) + "-" + +de.dayNum; - dataResultPanel.setPath(dateString); - } - }); - - - } - }); - } - bc.autosize(); - bc.setPrefWidth(FRAME_WIDTH); - return bc; - } - - private static ObservableList> makeObservableListByMonthAllDays(final MonthEpoch me, int year) { - ObservableList> bcData = FXCollections.observableArrayList(); - int totalDays = me.getTotalNumDays(year); - for (int i = 1; i <= totalDays; ++i) { - DayEpoch day = me.getDay(i); - int numFiles = day == null ? 0 : day.getNumFiles(); - BarChart.Data d = new BarChart.Data(me.month + 1 + "-" + i, numFiles); - d.setExtraValue(me); - bcData.add(d); - } - return bcData; - } - - /* - * Section for Utility functions - */ - /** - * - * @param mon The month to convert. Must be minimum 4 characters long - * "February" and "Febr" are acceptable. - * @return The integer value of the month. 
February = 1, July = 6 - */ - private static int monthStringToInt(String mon) { - try { - Date date = new SimpleDateFormat("MMMM", Locale.ENGLISH).parse(mon); - Calendar cal = Calendar.getInstance(); - cal.setTime(date); - return cal.get(Calendar.MONTH); - } catch (ParseException ex) { - logger.log(Level.WARNING, "Unable to convert string " + mon + " to integer", ex); - return -1; - } - } - - /** - * Used for finding the proper month in a list of available months - * - * @param lst The list of months to search through. It is assumed that the - * desired match is in this list. - * @param match The month, in integer format, to retrieve. - * @return The month epoch as specified by match. - */ - private static MonthEpoch findMonth(List lst, int match) { - for (MonthEpoch e : lst) { - if (e.month == match) { - return e; - } - } - return null; - } - - /** - * Used for finding the proper year in a list of available years - * - * @param lst The list of years to search through. It is assumed that the - * desired match is in this list. - * @param match The year to retrieve. - * @return The year epoch as specified by match. - */ - private static YearEpoch findYear(List lst, int match) { - for (YearEpoch e : lst) { - if (e.year == match) { - return e; - } - } - return null; - } - - @Override - public void propertyChange(PropertyChangeEvent evt) { - String prop = evt.getPropertyName(); - if (prop.equals(Case.CASE_ADD_DATA_SOURCE)) { - if (mainFrame != null && !mainFrame.isVisible()) { - // change the lastObjectId to trigger a reparse of mactime barData - ++lastObjectId; - return; - } - - int answer = JOptionPane.showConfirmDialog(mainFrame, "Timeline is out of date. 
Would you like to regenerate it?", "Select an option", JOptionPane.YES_NO_OPTION); - if (answer != JOptionPane.YES_OPTION) { - return; - } - - clearMactimeData(); - - // call performAction as if the user selected 'Make Timeline' from the menu - performAction(); - } else if (prop.equals(Case.CASE_CURRENT_CASE)) { - if (mainFrame != null && mainFrame.isVisible()) { - mainFrame.dispose(); - mainFrame = null; - } - - data = null; - } - } - - private void clearMactimeData() { - // get rid of the old barData - data = null; - - // get rid of the mactime file - java.io.File mactimeFile = new java.io.File(moduleDir, mactimeFileName); - mactimeFile.delete(); - - // close the jframe - if (mainFrame != null) { - mainFrame.setVisible(false); - mainFrame.dispose(); - mainFrame = null; - } - - // remove ourself as change listener on Case - Case.removePropertyChangeListener(this); - listeningToAddImage = false; - - } - - /* - * The backbone of the timeline functionality, years are split into months, months into days, and days contain the events of that given day. - * All of those are Epochs. 
- */ - abstract class Epoch { - - abstract public int getNumFiles(); - } - - private class YearEpoch extends Epoch { - - private int year; - private List months = new ArrayList<>(); - - YearEpoch(int year) { - this.year = year; - } - - public int getYear() { - return year; - } - - @Override - public int getNumFiles() { - int size = 0; - for (MonthEpoch me : months) { - size += me.getNumFiles(); - } - return size; - } - - public MonthEpoch getMonth(int monthNum) { - MonthEpoch month = null; - for (MonthEpoch me : months) { - if (me.getMonthInt() == monthNum) { - month = me; - break; - } - } - return month; - } - - public void add(long fileId, int month, int day) { - // see if this month is in the list - MonthEpoch monthEpoch = null; - for (MonthEpoch me : months) { - if (me.getMonthInt() == month) { - monthEpoch = me; - break; - } - } - - if (monthEpoch == null) { - monthEpoch = new MonthEpoch(month); - months.add(monthEpoch); - } - - // add the file the the MonthEpoch object - monthEpoch.add(fileId, day); - } - } - - private class MonthEpoch extends Epoch { - - private int month; //Zero-indexed: June = 5, August = 7, etc - private List days = new ArrayList<>(); //List of DayEpochs in this month, max 31 - - MonthEpoch(int month) { - this.month = month; - } - - public int getMonthInt() { - return month; - } - - public int getTotalNumDays(int year) { - Calendar cal = Calendar.getInstance(); - cal.set(year, month, 1); - return cal.getActualMaximum(Calendar.DAY_OF_MONTH); - } - - @Override - public int getNumFiles() { - int numFiles = 0; - for (DayEpoch de : days) { - numFiles += de.getNumFiles(); - } - return numFiles; - } - - public DayEpoch getDay(int dayNum) { - DayEpoch de = null; - for (DayEpoch d : days) { - if (d.dayNum == dayNum) { - de = d; - break; - } - } - return de; - } - - public void add(long fileId, int day) { - DayEpoch dayEpoch = null; - for (DayEpoch de : days) { - if (de.getDayInt() == day) { - dayEpoch = de; - break; - } - } - - if (dayEpoch == 
null) { - dayEpoch = new DayEpoch(day); - days.add(dayEpoch); - } - - dayEpoch.add(fileId); - } - - /** - * Returns the month's name in String format, e.g., September, July, - */ - String getMonthName() { - return new DateFormatSymbols().getMonths()[month]; - } - - /** - * @return the list of days in this month - */ - List getDays() { - return this.days; - } - } - - private class DayEpoch extends Epoch { - - private final List fileIds = new ArrayList<>(); - int dayNum = 0; //Day of the month this Epoch represents, 1 indexed: 28=28. - - DayEpoch(int dayOfMonth) { - this.dayNum = dayOfMonth; - } - - public int getDayInt() { - return dayNum; - } - - @Override - public int getNumFiles() { - return fileIds.size(); - } - - public void add(long fileId) { - fileIds.add(fileId); - } - - List getEvents() { - return this.fileIds; - } - } - - // The node factories used to make lists of files to send to the result viewer - // using the lazy loading (rather than background) loading option to facilitate - // loading a huge number of nodes for the given day - private class FileNodeChildFactory extends Children.Keys { - - private List fileIds; - - FileNodeChildFactory(List fileIds) { - super(true); - this.fileIds = fileIds; - } - - @Override - protected void addNotify() { - super.addNotify(); - setKeys(fileIds); - } - - @Override - protected void removeNotify() { - super.removeNotify(); - setKeys(new ArrayList()); - } - - @Override - protected Node[] createNodes(Long t) { - return new Node[]{createNodeForKey(t)}; - } - - // @Override - // protected boolean createKeys(List list) { - // list.addAll(fileIds); - // return true; - // } - //@Override - protected Node createNodeForKey(Long fileId) { - AbstractFile af = null; - try { - af = skCase.getAbstractFileById(fileId); - } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error getting file by id and creating a node in Timeline: " + fileId, ex); - //no node will be shown for this object - return null; - } - - Node wrapped; - 
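`MonthEpoch.getMonthName()` and `getTotalNumDays()` above lean on two stdlib facilities: `DateFormatSymbols` for zero-indexed month names, and `Calendar.getActualMaximum` for the day count of a month in a given year (handling leap years). A standalone sketch of both (class and method names are illustrative):

```java
import java.text.DateFormatSymbols;
import java.util.Calendar;
import java.util.Locale;

class MonthInfo {
    // Zero-indexed month name, as in MonthEpoch.getMonthName(): 1 -> "February".
    static String monthName(int month) {
        return new DateFormatSymbols(Locale.ENGLISH).getMonths()[month];
    }

    // Number of days in a (year, zero-indexed month), as in MonthEpoch.getTotalNumDays().
    static int daysInMonth(int year, int month) {
        Calendar cal = Calendar.getInstance();
        cal.set(year, month, 1);
        return cal.getActualMaximum(Calendar.DAY_OF_MONTH);
    }
}
```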
if (af.isDir()) { - wrapped = new DirectoryNode(af, false); - } else { - wrapped = new FileNode(af, false); - } - return new FilterNodeLeaf(wrapped); - } - } - - private class FileRootNode extends DisplayableItemNode { - - FileRootNode(String NAME, List fileIds) { - //super(Children.create(new FileNodeChildFactory(fileIds), true)); - super(new FileNodeChildFactory(fileIds), Lookups.singleton(fileIds)); - super.setName(NAME); - super.setDisplayName(NAME); - } - - @Override - public DisplayableItemNode.TYPE getDisplayableItemNodeType() { - return DisplayableItemNode.TYPE.CONTENT; - } - - @Override - public T accept(DisplayableItemNodeVisitor v) { - return null; - } - } - - private List parseMacTime(java.io.File f) { - List years = new ArrayList<>(); - Scanner scan; - try { - scan = new Scanner(new FileInputStream(f)); - } catch (FileNotFoundException ex) { - logger.log(Level.SEVERE, "Error: could not find mactime file.", ex); - return years; - } - scan.useDelimiter(","); - scan.nextLine(); // skip the header line - - int prevYear = -1; - YearEpoch ye = null; - while (scan.hasNextLine()) { - String[] s = scan.nextLine().split(","); //1999-02-08T11:08:08Z, 78706, m..b, rrwxrwxrwx, 0, 0, 8355, /img... 
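`parseMacTime` above splits each mactime row on commas and then breaks the ISO timestamp apart, zero-indexing the month and reading the object id out of the fifth field (the UID column, which the body file repurposes for ObjId). A self-contained sketch of that per-line parse (the `Row` holder is hypothetical):

```java
class MactimeLineParser {
    static class Row {
        final int year, month, day; // month is zero-indexed, day is 1-indexed
        final long objId;
        Row(int year, int month, int day, long objId) {
            this.year = year; this.month = month; this.day = day; this.objId = objId;
        }
    }

    // e.g. "1999-02-08T11:08:08Z,78706,m..b,rrwxrwxrwx,42,0,8355,/img/file"
    static Row parse(String line) {
        String[] s = line.split(",");
        String[] date = s[0].split("T")[0].split("-"); // {1999, 02, 08}
        return new Row(Integer.valueOf(date[0]),
                       Integer.valueOf(date[1]) - 1, // zero-index the month
                       Integer.valueOf(date[2]),
                       Long.valueOf(s[4]));          // ObjId stored in the UID column
    }
}
```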
- String[] datetime = s[0].split("T"); //{1999-02-08, 11:08:08Z} - String[] date = datetime[0].split("-"); // {1999, 02, 08} - int year = Integer.valueOf(date[0]); - int month = Integer.valueOf(date[1]) - 1; //Months are zero indexed: 1 = February, 6 = July, 11 = December - int day = Integer.valueOf(date[2]); //Days are 1 indexed - long ObjId = Long.valueOf(s[4]); - - // when the year changes, create and add a new YearEpoch object to the list - if (year != prevYear) { - ye = new YearEpoch(year); - years.add(ye); - prevYear = year; - } - - if (ye != null) { - ye.add(ObjId, month, day); - } - } - - scan.close(); - - return years; - } - - /** - * Crate a body file and return its path or null if error - * - * @return absolute path string or null if error - */ - private String makeBodyFile() { - // Setup timestamp - DateFormat dateFormat = new SimpleDateFormat("MM-dd-yyyy-HH-mm-ss"); - Date date = new Date(); - String datenotime = dateFormat.format(date); - - final Case currentCase = Case.getCurrentCase(); - - // Get report path - String bodyFilePath = moduleDir.getAbsolutePath() - + java.io.File.separator + currentCase.getName() + "-" + datenotime + ".txt"; - - // Run query to get all files - final String filesAndDirs = "name != '.' 
" - + "AND name != '..'"; - List fileIds = null; - try { - fileIds = skCase.findAllFileIdsWhere(filesAndDirs); - } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error querying image files to make a body file: " + bodyFilePath, ex); - return null; - } - - // Loop files and write info to report - FileWriter fileWriter = null; - try { - fileWriter = new FileWriter(bodyFilePath, true); - } catch (IOException ex) { - logger.log(Level.SEVERE, "Error creating output stream to write body file to: " + bodyFilePath, ex); - return null; - } - - BufferedWriter out = null; - try { - out = new BufferedWriter(fileWriter); - for (long fileId : fileIds) { - AbstractFile file = skCase.getAbstractFileById(fileId); - // try { - // MD5|name|inode|mode_as_string|ObjId|GID|size|atime|mtime|ctime|crtime - if (file.getMd5Hash() != null) { - out.write(file.getMd5Hash()); - } - out.write("|"); - String path = null; - try { - path = file.getUniquePath(); - } catch (TskCoreException e) { - logger.log(Level.SEVERE, "Failed to get the unique path of: " + file + " and writing body file.", e); - return null; - } - - out.write(path); - - out.write("|"); - out.write(Long.toString(file.getMetaAddr())); - out.write("|"); - String modeString = file.getModesAsString(); - if (modeString != null) { - out.write(modeString); - } - out.write("|"); - out.write(Long.toString(file.getId())); - out.write("|"); - out.write(Long.toString(file.getGid())); - out.write("|"); - out.write(Long.toString(file.getSize())); - out.write("|"); - out.write(Long.toString(file.getAtime())); - out.write("|"); - out.write(Long.toString(file.getMtime())); - out.write("|"); - out.write(Long.toString(file.getCtime())); - out.write("|"); - out.write(Long.toString(file.getCrtime())); - out.write("\n"); - } - } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error querying file by id", ex); - return null; - - } catch (IOException ex) { - logger.log(Level.WARNING, "Error while trying to write data to the body 
file.", ex); - return null; - } finally { - if (out != null) { - try { - out.flush(); - out.close(); - } catch (IOException ex1) { - logger.log(Level.WARNING, "Could not flush and/or close body file.", ex1); - } - } - } - - - return bodyFilePath; - } - - private String makeMacTime(String pathToBodyFile) { - String cmdpath = ""; - String macpath = ""; - String[] mactimeArgs; - final String machome = macRoot.getAbsolutePath(); - pathToBodyFile = PlatformUtil.getOSFilePath(pathToBodyFile); - if (PlatformUtil.isWindowsOS()) { - macpath = machome + java.io.File.separator + "mactime.exe"; - cmdpath = PlatformUtil.getOSFilePath(macpath); - mactimeArgs = new String[]{"-b", pathToBodyFile, "-d", "-y"}; - } else { - cmdpath = "perl"; - macpath = machome + java.io.File.separator + "mactime.pl"; - mactimeArgs = new String[]{macpath, "-b", pathToBodyFile, "-d", "-y"}; - } - - String macfile = moduleDir.getAbsolutePath() + java.io.File.separator + mactimeFileName; - - - String output = ""; - ExecUtil execUtil = new ExecUtil(); - Writer writer = null; - try { - //JavaSystemCaller.Exec.execute("\"" + command + "\""); - writer = new FileWriter(macfile); - execUtil.execute(writer, cmdpath, mactimeArgs); - } catch (InterruptedException ie) { - logger.log(Level.WARNING, "Mactime process was interrupted by user", ie); - return null; - } catch (IOException ioe) { - logger.log(Level.SEVERE, "Could not create mactime file, encountered error ", ioe); - return null; - } finally { - if (writer != null) { - try { - writer.close(); - } catch (IOException ex) { - logger.log(Level.SEVERE, "Could not clsoe writer after creating mactime file, encountered error ", ex); - } - } - } - - return macfile; - } - - @Override - public boolean isEnabled() { - return Case.isCaseOpen() && this.fxInited; - } - - @Override - public void performAction() { - initTimeline(); - } - - private void initTimeline() { - if (!Case.existsCurrentCase()) { - return; - } - - final Case currentCase = Case.getCurrentCase(); - 
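`makeBodyFile` above emits one pipe-delimited row per file in the TSK body format noted in its comment (MD5|name|inode|mode_as_string|ObjId|GID|size|atime|mtime|ctime|crtime), writing empty fields for a null MD5 or mode string. A sketch of building a single such row, with plain values standing in for the `AbstractFile` getters (the helper is hypothetical):

```java
class BodyFileRow {
    // Build one body-file row; null md5/mode become empty fields, matching the writer above.
    static String row(String md5, String path, long metaAddr, String mode,
                      long objId, long gid, long size,
                      long atime, long mtime, long ctime, long crtime) {
        StringBuilder sb = new StringBuilder();
        sb.append(md5 == null ? "" : md5).append('|')
          .append(path).append('|')
          .append(metaAddr).append('|')
          .append(mode == null ? "" : mode).append('|')
          .append(objId).append('|')
          .append(gid).append('|')
          .append(size).append('|')
          .append(atime).append('|')
          .append(mtime).append('|')
          .append(ctime).append('|')
          .append(crtime);
        return sb.toString();
    }
}
```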
skCase = currentCase.getSleuthkitCase(); - - try { - if (currentCase.getRootObjectsCount() == 0) { - logger.log(Level.INFO, "Error creating timeline, there are no data sources. "); - } else { - - if (IngestManager.getDefault().isIngestRunning()) { - int answer = JOptionPane.showConfirmDialog(new JFrame(), - "You are trying to generate a timeline before " - + "ingest has been completed. The timeline may be " - + "incomplete. Do you want to continue?", "Timeline", - JOptionPane.YES_NO_OPTION); - if (answer != JOptionPane.YES_OPTION) { - return; - } - } - - logger.log(Level.INFO, "Beginning generation of timeline"); - - // if the timeline window is already open, bring to front and do nothing - if (mainFrame != null && mainFrame.isVisible()) { - mainFrame.toFront(); - return; - } - - // listen for case changes (specifically images being added). - if (Case.isCaseOpen() && !listeningToAddImage) { - Case.addPropertyChangeListener(this); - listeningToAddImage = true; - } - - // create the modal progressDialog - SwingUtilities.invokeLater(new Runnable() { - @Override - public void run() { - progressDialog = new TimelineProgressDialog(WindowManager.getDefault().getMainWindow(), true); - progressDialog.setVisible(true); - } - }); - - // initialize mactimeFileName - mactimeFileName = currentCase.getName() + "-MACTIME.txt"; - - // see if barData has been added to the database since the last - // time timeline ran - long objId = skCase.getLastObjectId(); - if (objId != lastObjectId && lastObjectId != -1) { - clearMactimeData(); - } - lastObjectId = objId; - - customize(); - } - } catch (TskCoreException ex) { - logger.log(Level.SEVERE, "Error when generating timeline, ", ex); - } catch (Exception ex) { - logger.log(Level.SEVERE, "Unexpected error when generating timeline, ", ex); - } - } - - @Override - public String getName() { - return "Make Timeline (Beta)"; - } - - @Override - public HelpCtx getHelpCtx() { - return HelpCtx.DEFAULT_HELP; - } - - @Override - public boolean 
asynchronous() { - return false; - } -} +/* + * Autopsy Forensic Browser + * + * Copyright 2013 Basis Technology Corp. + * Contact: carrier sleuthkit org + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.sleuthkit.autopsy.timeline; + +import java.awt.Component; +import java.awt.Cursor; +import java.awt.Dimension; +import java.awt.EventQueue; +import java.beans.PropertyChangeEvent; +import java.beans.PropertyChangeListener; +import java.io.BufferedWriter; +import java.io.FileInputStream; +import java.io.FileNotFoundException; +import java.io.FileWriter; +import java.io.IOException; +import java.io.Writer; +import java.text.DateFormat; +import java.text.DateFormatSymbols; +import java.text.ParseException; +import java.text.SimpleDateFormat; +import java.util.ArrayList; +import java.util.Calendar; +import java.util.Date; +import java.util.List; +import java.util.Locale; +import java.util.Scanner; +import java.util.Stack; +import java.util.logging.Level; +import javafx.application.Platform; +import javafx.beans.value.ChangeListener; +import javafx.beans.value.ObservableValue; +import javafx.collections.FXCollections; +import javafx.collections.ObservableList; +import javafx.embed.swing.JFXPanel; +import javafx.event.ActionEvent; +import javafx.event.EventHandler; +import javafx.geometry.Pos; +import javafx.scene.Group; +import javafx.scene.Scene; +import javafx.scene.chart.BarChart; +import javafx.scene.chart.CategoryAxis; +import 
javafx.scene.chart.NumberAxis; +import javafx.scene.control.Button; +import javafx.scene.control.ComboBox; +import javafx.scene.control.Label; +import javafx.scene.control.ScrollPane; +import javafx.scene.input.MouseButton; +import javafx.scene.input.MouseEvent; +import javafx.scene.layout.HBox; +import javafx.scene.layout.VBox; +import javafx.scene.paint.Color; +import javax.swing.JFrame; +import javax.swing.JOptionPane; +import javax.swing.SwingUtilities; +import org.netbeans.api.progress.ProgressHandle; +import org.netbeans.api.progress.ProgressHandleFactory; +import org.openide.awt.ActionID; +import org.openide.awt.ActionReference; +import org.openide.awt.ActionReferences; +import org.openide.awt.ActionRegistration; +import org.openide.modules.InstalledFileLocator; +import org.openide.modules.ModuleInstall; +import org.openide.nodes.Children; +import org.openide.nodes.Node; +import org.openide.util.HelpCtx; +import org.openide.util.NbBundle; +import org.openide.util.actions.CallableSystemAction; +import org.openide.util.actions.Presenter; +import org.openide.util.lookup.Lookups; +import org.openide.windows.WindowManager; +import org.sleuthkit.autopsy.casemodule.Case; +import org.sleuthkit.autopsy.core.Installer; +import org.sleuthkit.autopsy.corecomponents.DataContentPanel; +import org.sleuthkit.autopsy.corecomponents.DataResultPanel; +import org.sleuthkit.autopsy.coreutils.Logger; +import org.sleuthkit.autopsy.coreutils.PlatformUtil; +import org.sleuthkit.autopsy.datamodel.FilterNodeLeaf; +import org.sleuthkit.autopsy.datamodel.DirectoryNode; +import org.sleuthkit.autopsy.datamodel.DisplayableItemNode; +import org.sleuthkit.autopsy.datamodel.DisplayableItemNodeVisitor; +import org.sleuthkit.autopsy.datamodel.FileNode; +import org.sleuthkit.autopsy.ingest.IngestManager; +import org.sleuthkit.autopsy.coreutils.ExecUtil; +import org.sleuthkit.datamodel.AbstractFile; +import org.sleuthkit.datamodel.SleuthkitCase; +import org.sleuthkit.datamodel.TskCoreException; + 
+@ActionID(category = "Tools", id = "org.sleuthkit.autopsy.timeline.Timeline") +@ActionRegistration(displayName = "#CTL_MakeTimeline", lazy = false) +@ActionReferences(value = { + @ActionReference(path = "Menu/Tools", position = 100)}) +@NbBundle.Messages(value = "CTL_TimelineView=Generate Timeline") +/** + * The Timeline Action entry point. Collects data and pushes it to the JavaFX + * widgets. + */ +public class Timeline extends CallableSystemAction implements Presenter.Toolbar, PropertyChangeListener { + + private static final Logger logger = Logger.getLogger(Timeline.class.getName()); + private final java.io.File macRoot = InstalledFileLocator.getDefault().locate("mactime", Timeline.class.getPackage().getName(), false); + private TimelineFrame mainFrame; //frame for holding all the elements + private Group fxGroupCharts; //Orders the charts + private Scene fxSceneCharts; //Displays the charts + private HBox fxHBoxCharts; //Holds the navigation buttons in horizontal fashion. + private VBox fxVBox; //Holds the JavaFX elements in vertical fashion. + private JFXPanel fxPanelCharts; //FX panel to hold the group + private BarChart fxChartEvents; //Yearly/Monthly events - Bar chart + private ScrollPane fxScrollEvents; //Scroll pane for dealing with an oversized chart + private static final int FRAME_HEIGHT = 700; //Sizing constants + private static final int FRAME_WIDTH = 1200; + private Button fxZoomOutButton; //Navigation buttons + private ComboBox fxDropdownSelectYears; //Dropdown box for selecting years. Useful when the charts' scale means some years are unclickable, despite having events. + private final Stack<BarChart> fxStackPrevCharts = new Stack<BarChart>(); //Stack for storing drill-up information. + private BarChart fxChartTopLevel; //the topmost chart, used for resetting to default view.
+ private DataResultPanel dataResultPanel; + private DataContentPanel dataContentPanel; + private ProgressHandle progress; + private java.io.File moduleDir; + private String mactimeFileName; + private List data; + private boolean listeningToAddImage = false; + private long lastObjectId = -1; + private TimelineProgressDialog progressDialog; + private EventHandler fxMouseEnteredListener; + private EventHandler fxMouseExitedListener; + private SleuthkitCase skCase; + private boolean fxInited = false; + + public Timeline() { + super(); + + fxInited = Installer.isJavaFxInited(); + + } + + //Swing components and JavafX components don't play super well together + //Swing components need to be initialized first, in the swing specific thread + //Next, the javafx components may be initialized. + private void customize() { + + //listeners + fxMouseEnteredListener = new EventHandler() { + @Override + public void handle(MouseEvent e) { + fxPanelCharts.setCursor(Cursor.getPredefinedCursor(Cursor.HAND_CURSOR)); + } + }; + fxMouseExitedListener = new EventHandler() { + @Override + public void handle(MouseEvent e) { + fxPanelCharts.setCursor(null); + } + }; + + SwingUtilities.invokeLater(new Runnable() { + @Override + public void run() { + //Making the main frame * + + mainFrame = new TimelineFrame(); + mainFrame.setFrameName(Case.getCurrentCase().getName() + " - Autopsy Timeline (Beta)"); + + //use the same icon on jframe as main application + mainFrame.setIconImage(WindowManager.getDefault().getMainWindow().getIconImage()); + mainFrame.setFrameSize(new Dimension(FRAME_WIDTH, FRAME_HEIGHT)); //(Width, Height) + + + dataContentPanel = DataContentPanel.createInstance(); + //dataContentPanel.setAlignmentX(Component.RIGHT_ALIGNMENT); + //dataContentPanel.setPreferredSize(new Dimension(FRAME_WIDTH, (int) (FRAME_HEIGHT * 0.4))); + + dataResultPanel = DataResultPanel.createInstance("Timeline Results", "", Node.EMPTY, 0, dataContentPanel); + 
dataResultPanel.setContentViewer(dataContentPanel); + //dataResultPanel.setAlignmentX(Component.LEFT_ALIGNMENT); + //dataResultPanel.setPreferredSize(new Dimension((int)(FRAME_WIDTH * 0.5), (int) (FRAME_HEIGHT * 0.5))); + logger.log(Level.INFO, "Successfully created viewers"); + + mainFrame.setBottomLeftPanel(dataResultPanel); + mainFrame.setBottomRightPanel(dataContentPanel); + + runJavaFxThread(); + } + }); + + + } + + private void runJavaFxThread() { + //JavaFX thread + //JavaFX components MUST be run in the JavaFX thread, otherwise massive amounts of exceptions will be thrown and caught. Liable to freeze up and crash. + //Components can be declared whenever, but initialization and manipulation must take place here. + Platform.runLater(new Runnable() { + @Override + public void run() { + try { + // start the progress bar + progress = ProgressHandleFactory.createHandle("Creating timeline . . ."); + progress.start(); + + fxChartEvents = null; //important to reset old data + fxPanelCharts = new JFXPanel(); + fxGroupCharts = new Group(); + fxSceneCharts = new Scene(fxGroupCharts, FRAME_WIDTH, FRAME_HEIGHT * 0.6); //Width, Height + fxVBox = new VBox(5); + fxVBox.setAlignment(Pos.BOTTOM_CENTER); + fxHBoxCharts = new HBox(10); + fxHBoxCharts.setAlignment(Pos.BOTTOM_CENTER); + + //Initializing default values for the scroll pane + fxScrollEvents = new ScrollPane(); + fxScrollEvents.setPrefSize(FRAME_WIDTH, FRAME_HEIGHT * 0.6); //Width, Height + fxScrollEvents.setContent(null); //Needs some content, otherwise it crashes + + // set up moduleDir + moduleDir = new java.io.File(Case.getCurrentCase().getModulesOutputDirAbsPath() + java.io.File.separator + "timeline"); + if (!moduleDir.exists()) { + moduleDir.mkdir(); + } + + int currentProgress = 0; + java.io.File mactimeFile = new java.io.File(moduleDir, mactimeFileName); + if (!mactimeFile.exists()) { + progressDialog.setProgressTotal(3); //total 3 units + logger.log(Level.INFO, "Creating body file"); + 
progressDialog.updateProgressBar("Generating Bodyfile"); + String bodyFilePath = makeBodyFile(); + progressDialog.updateProgressBar(++currentProgress); + logger.log(Level.INFO, "Creating mactime file: " + mactimeFile.getAbsolutePath()); + progressDialog.updateProgressBar("Generating Mactime"); + makeMacTime(bodyFilePath); + progressDialog.updateProgressBar(++currentProgress); + data = null; + } else { + progressDialog.setProgressTotal(1); //total 1 unit + logger.log(Level.INFO, "Mactime file already exists; parsing that: " + mactimeFile.getAbsolutePath()); + } + + + progressDialog.updateProgressBar("Parsing Mactime"); + if (data == null) { + logger.log(Level.INFO, "Parsing mactime file: " + mactimeFile.getAbsolutePath()); + data = parseMacTime(mactimeFile); //The sum total of the mactime parsing. YearEpochs contain everything you need to make a timeline. + } + progressDialog.updateProgressBar(++currentProgress); + + //Making a dropdown box to select years. + List lsi = new ArrayList(); //List is in the format of {Year : Number of Events}, used for selecting from the dropdown. 
+ for (YearEpoch ye : data) { + lsi.add(ye.year + " : " + ye.getNumFiles()); + } + ObservableList listSelect = FXCollections.observableArrayList(lsi); + fxDropdownSelectYears = new ComboBox(listSelect); + + //Buttons for navigating up and down the timeline + fxZoomOutButton = new Button("Zoom Out"); + fxZoomOutButton.setOnAction(new EventHandler() { + @Override + public void handle(ActionEvent e) { + BarChart bc; + if (fxStackPrevCharts.size() == 0) { + bc = fxChartTopLevel; + } else { + bc = fxStackPrevCharts.pop(); + } + fxChartEvents = bc; + fxScrollEvents.setContent(fxChartEvents); + } + }); + + fxDropdownSelectYears.getSelectionModel().selectedItemProperty().addListener(new ChangeListener() { + @Override + public void changed(ObservableValue ov, String t, String t1) { + if (fxDropdownSelectYears.getValue() != null) { + mainFrame.setTopComponentCursor(Cursor.getPredefinedCursor(Cursor.WAIT_CURSOR)); + try { + fxChartEvents = createMonthsWithDrill(findYear(data, Integer.valueOf(fxDropdownSelectYears.getValue().split(" ")[0]))); + fxScrollEvents.setContent(fxChartEvents); + } finally { + mainFrame.setTopComponentCursor(null); + } + } + } + }); + + //Adding things to the V and H boxes. + //hBox_Charts stores the pseudo menu bar at the top of the timeline. |Zoom Out|View Year: [Select Year]|►| + fxHBoxCharts.getChildren().addAll(fxZoomOutButton, new Label("Go To:"), fxDropdownSelectYears); + fxVBox.getChildren().addAll(fxHBoxCharts, fxScrollEvents); //FxBox_V holds things in a visual stack. + fxGroupCharts.getChildren().add(fxVBox); //Adding the FxBox to the group. Groups make things easier to manipulate without having to update a hundred things every change. 
+ fxPanelCharts.setScene(fxSceneCharts); + + + fxPanelCharts.setAlignmentX(Component.LEFT_ALIGNMENT); + + fxChartTopLevel = createYearChartWithDrill(data); + fxChartEvents = fxChartTopLevel; + fxScrollEvents.setContent(fxChartEvents); + + EventQueue.invokeLater(new Runnable() { + @Override + public void run() { + mainFrame.setTopPanel(fxPanelCharts); + dataResultPanel.open(); + //mainFrame.pack(); + mainFrame.setVisible(true); + } + }); + } finally { + // stop the progress bar + progress.finish(); + + // close the progressDialog + progressDialog.doClose(0); + } + } + }); + } + + /** + * Creates a BarChart with datapoints for all the years from the parsed + * mactime file. + * + * @param allYears The list of years that have data from the mactime file + * @return BarChart scaled to the year level + */ + private BarChart createYearChartWithDrill(final List allYears) { + final CategoryAxis xAxis = new CategoryAxis(); //Axes are very specific types. Categories are strings. + final NumberAxis yAxis = new NumberAxis(); + final Label l = new Label(""); + l.setStyle("-fx-font: 24 arial;"); + l.setTextFill(Color.AZURE); + xAxis.setLabel("Years"); + yAxis.setLabel("Number of Events"); + //Charts are made up of individual pieces of Chart.Data. In this case, a piece of barData is a single bar on the graph. + //Data is packaged into a series, which can be assigned custom colors or styling + //After the series are created, 1 or more series are packaged into a single chart. + ObservableList<BarChart.Series<String, Number>> bcData = FXCollections.observableArrayList(); + BarChart.Series se = new BarChart.Series(); + if (allYears != null) { + for (final YearEpoch ye : allYears) { + se.getData().add(new BarChart.Data(String.valueOf(ye.year), ye.getNumFiles())); + } + } + bcData.add(se); + + + //Note: + // BarChart.Data wraps the Java Nodes class. BUT, until a BarChart.Data gets added to an actual series, its node is null, and you can perform no operations on it. 
+ // When the Data is added to a series (or a chart? I am unclear on where), a node is automatically generated for it, after which you can perform any of the operations it offers. + // In addition, you are free to set the node to whatever you want. It wraps the most generic Node class. + // But it is for this reason that the chart generating functions have two for loops. I do not believe they can be condensed into a single loop due to the nodes being null until + // an undetermined point in time. + BarChart bc = new BarChart(xAxis, yAxis, bcData); + for (final BarChart.Data barData : bc.getData().get(0).getData()) { //.get(0) refers to the BarChart.Series class to work on. There is only one series in this graph, so get(0) is safe. + barData.getNode().setScaleX(.5); + + final javafx.scene.Node barNode = barData.getNode(); + //hover listener + barNode.addEventHandler(MouseEvent.MOUSE_ENTERED_TARGET, fxMouseEnteredListener); + barNode.addEventHandler(MouseEvent.MOUSE_EXITED_TARGET, fxMouseExitedListener); + + //click listener + barNode.addEventHandler(MouseEvent.MOUSE_CLICKED, + new EventHandler() { + @Override + public void handle(MouseEvent e) { + if (e.getButton().equals(MouseButton.PRIMARY)) { + if (e.getClickCount() == 1) { + Platform.runLater(new Runnable() { + @Override + public void run() { + BarChart b = + createMonthsWithDrill(findYear(allYears, Integer.valueOf(barData.getXValue()))); + fxChartEvents = b; + fxScrollEvents.setContent(fxChartEvents); + } + }); + + } + } + } + }); + } + + bc.autosize(); //Get an auto height + bc.setPrefWidth(FRAME_WIDTH); //but override the width + bc.setLegendVisible(false); //The legend adds too much extra chart space; it's not necessary. + return bc; + } + + /* + * Displays a chart with events from one year only, separated into 1-month chunks. + * Always 12 per year, empty months are represented by no bar. 
+ */ + private BarChart createMonthsWithDrill(final YearEpoch ye) { + + final CategoryAxis xAxis = new CategoryAxis(); + final NumberAxis yAxis = new NumberAxis(); + xAxis.setLabel("Month (" + ye.year + ")"); + yAxis.setLabel("Number of Events"); + ObservableList<BarChart.Series<String, Number>> bcData = FXCollections.observableArrayList(); + + BarChart.Series se = new BarChart.Series(); + for (int monthNum = 0; monthNum < 12; ++monthNum) { + String monthName = new DateFormatSymbols().getMonths()[monthNum]; + MonthEpoch month = ye.getMonth(monthNum); + int numEvents = month == null ? 0 : month.getNumFiles(); + se.getData().add(new BarChart.Data(monthName, numEvents)); //Adding new barData at {X-pos, Y-Pos} + } + bcData.add(se); + final BarChart bc = new BarChart(xAxis, yAxis, bcData); + + for (int i = 0; i < 12; i++) { + for (final BarChart.Data barData : bc.getData().get(0).getData()) { + //Note: + // All the charts of this package have a problem where when the chart gets below a certain pixel ratio, the barData stops drawing. The axes and the labels remain, + // But the actual chart barData is invisible, unclickable, and unrendered. To partially compensate for that, barData.getNode() can be manually scaled up to increase visibility. + // Sometimes I've had it jacked up to as much as x2400 just to see a sliver of information. + // But that doesn't work all the time. Adding it to a scrollpane and letting the user scroll up and down to view the chart is the other workaround. Both of these fixes suck. 
+ final javafx.scene.Node barNode = barData.getNode(); + barNode.setScaleX(.5); + + //hover listener + barNode.addEventHandler(MouseEvent.MOUSE_ENTERED_TARGET, fxMouseEnteredListener); + barNode.addEventHandler(MouseEvent.MOUSE_EXITED_TARGET, fxMouseExitedListener); + + //clicks + barNode.addEventHandler(MouseEvent.MOUSE_PRESSED, + new EventHandler() { + @Override + public void handle(MouseEvent e) { + if (e.getButton().equals(MouseButton.PRIMARY)) { + if (e.getClickCount() == 1) { + Platform.runLater(new Runnable() { + @Override + public void run() { + fxChartEvents = createEventsByMonth(findMonth(ye.months, monthStringToInt(barData.getXValue())), ye); + fxScrollEvents.setContent(fxChartEvents); + } + }); + } + } + } + }); + } + } + + bc.autosize(); + bc.setPrefWidth(FRAME_WIDTH); + bc.setLegendVisible(false); + fxStackPrevCharts.push(bc); + return bc; + } + + + /* + * Displays a chart with events from one month only. + * Up to 31 days per month, as low as 28 as determined by the specific MonthEpoch + */ + private BarChart createEventsByMonth(final MonthEpoch me, final YearEpoch ye) { + final CategoryAxis xAxis = new CategoryAxis(); + final NumberAxis yAxis = new NumberAxis(); + xAxis.setLabel("Day of Month"); + yAxis.setLabel("Number of Events"); + ObservableList<BarChart.Data> bcData = makeObservableListByMonthAllDays(me, ye.getYear()); + BarChart.Series series = new BarChart.Series(bcData); + series.setName(me.getMonthName() + " " + ye.getYear()); + + + ObservableList<BarChart.Series<String, Number>> ol = + FXCollections.<BarChart.Series<String, Number>>observableArrayList(series); + + final BarChart bc = new BarChart(xAxis, yAxis, ol); + for (final BarChart.Data barData : bc.getData().get(0).getData()) { + //data.getNode().setScaleX(2); + + final javafx.scene.Node barNode = barData.getNode(); + + //hover listener + barNode.addEventHandler(MouseEvent.MOUSE_ENTERED_TARGET, fxMouseEnteredListener); + barNode.addEventHandler(MouseEvent.MOUSE_EXITED_TARGET, fxMouseExitedListener); + + barNode.addEventHandler(MouseEvent.MOUSE_PRESSED, + new 
EventHandler() { + MonthEpoch myme = me; + + @Override + public void handle(MouseEvent e) { + SwingUtilities.invokeLater(new Runnable() { + @Override + public void run() { + //reset the view and free the current nodes before loading new ones + final FileRootNode d = new FileRootNode("Empty Root", new ArrayList()); + dataResultPanel.setNode(d); + dataResultPanel.setPath("Loading..."); + } + }); + final int day = (Integer.valueOf((barData.getXValue()).split("-")[1])); + final DayEpoch de = myme.getDay(day); + final List afs; + if (de != null) { + afs = de.getEvents(); + } else { + logger.log(Level.SEVERE, "There were no events for the clicked-on day: " + day); + return; + } + + SwingUtilities.invokeLater(new Runnable() { + @Override + public void run() { + final FileRootNode d = new FileRootNode("Root", afs); + dataResultPanel.setNode(d); + //set result viewer title path with the current date + String dateString = ye.getYear() + "-" + (1 + me.getMonthInt()) + "-" + +de.dayNum; + dataResultPanel.setPath(dateString); + } + }); + + + } + }); + } + bc.autosize(); + bc.setPrefWidth(FRAME_WIDTH); + return bc; + } + + private static ObservableList<BarChart.Data> makeObservableListByMonthAllDays(final MonthEpoch me, int year) { + ObservableList<BarChart.Data> bcData = FXCollections.observableArrayList(); + int totalDays = me.getTotalNumDays(year); + for (int i = 1; i <= totalDays; ++i) { + DayEpoch day = me.getDay(i); + int numFiles = day == null ? 0 : day.getNumFiles(); + BarChart.Data d = new BarChart.Data(me.month + 1 + "-" + i, numFiles); + d.setExtraValue(me); + bcData.add(d); + } + return bcData; + } + + /* + * Section for Utility functions + */ + /** + * + * @param mon The month to convert. Must be a minimum of 4 characters long; + * "February" and "Febr" are acceptable. + * @return The integer value of the month. 
February = 1, July = 6 + */ + private static int monthStringToInt(String mon) { + try { + Date date = new SimpleDateFormat("MMMM", Locale.ENGLISH).parse(mon); + Calendar cal = Calendar.getInstance(); + cal.setTime(date); + return cal.get(Calendar.MONTH); + } catch (ParseException ex) { + logger.log(Level.WARNING, "Unable to convert string " + mon + " to integer", ex); + return -1; + } + } + + /** + * Used for finding the proper month in a list of available months + * + * @param lst The list of months to search through. It is assumed that the + * desired match is in this list. + * @param match The month, in integer format, to retrieve. + * @return The month epoch as specified by match. + */ + private static MonthEpoch findMonth(List lst, int match) { + for (MonthEpoch e : lst) { + if (e.month == match) { + return e; + } + } + return null; + } + + /** + * Used for finding the proper year in a list of available years + * + * @param lst The list of years to search through. It is assumed that the + * desired match is in this list. + * @param match The year to retrieve. + * @return The year epoch as specified by match. + */ + private static YearEpoch findYear(List lst, int match) { + for (YearEpoch e : lst) { + if (e.year == match) { + return e; + } + } + return null; + } + + @Override + public void propertyChange(PropertyChangeEvent evt) { + String prop = evt.getPropertyName(); + if (prop.equals(Case.CASE_ADD_DATA_SOURCE)) { + if (mainFrame != null && !mainFrame.isVisible()) { + // change the lastObjectId to trigger a reparse of the mactime data + ++lastObjectId; + return; + } + + int answer = JOptionPane.showConfirmDialog(mainFrame, "Timeline is out of date. 
Would you like to regenerate it?", "Select an option", JOptionPane.YES_NO_OPTION); + if (answer != JOptionPane.YES_OPTION) { + return; + } + + clearMactimeData(); + + // call performAction as if the user selected 'Make Timeline' from the menu + performAction(); + } else if (prop.equals(Case.CASE_CURRENT_CASE)) { + if (mainFrame != null && mainFrame.isVisible()) { + mainFrame.dispose(); + mainFrame = null; + } + + data = null; + } + } + + private void clearMactimeData() { + // get rid of the old data + data = null; + + // get rid of the mactime file + java.io.File mactimeFile = new java.io.File(moduleDir, mactimeFileName); + mactimeFile.delete(); + + // close the jframe + if (mainFrame != null) { + mainFrame.setVisible(false); + mainFrame.dispose(); + mainFrame = null; + } + + // remove ourselves as a change listener on Case + Case.removePropertyChangeListener(this); + listeningToAddImage = false; + + } + + /* + * The backbone of the timeline functionality: years are split into months, months into days, and days contain the events of that given day. + * All of those are Epochs. 
+ */ + abstract class Epoch { + + abstract public int getNumFiles(); + } + + private class YearEpoch extends Epoch { + + private int year; + private List<MonthEpoch> months = new ArrayList<>(); + + YearEpoch(int year) { + this.year = year; + } + + public int getYear() { + return year; + } + + @Override + public int getNumFiles() { + int size = 0; + for (MonthEpoch me : months) { + size += me.getNumFiles(); + } + return size; + } + + public MonthEpoch getMonth(int monthNum) { + MonthEpoch month = null; + for (MonthEpoch me : months) { + if (me.getMonthInt() == monthNum) { + month = me; + break; + } + } + return month; + } + + public void add(long fileId, int month, int day) { + // see if this month is in the list + MonthEpoch monthEpoch = null; + for (MonthEpoch me : months) { + if (me.getMonthInt() == month) { + monthEpoch = me; + break; + } + } + + if (monthEpoch == null) { + monthEpoch = new MonthEpoch(month); + months.add(monthEpoch); + } + + // add the file to the MonthEpoch object + monthEpoch.add(fileId, day); + } + } + + private class MonthEpoch extends Epoch { + + private int month; //Zero-indexed: June = 5, August = 7, etc. + private List<DayEpoch> days = new ArrayList<>(); //List of DayEpochs in this month, max 31 + + MonthEpoch(int month) { + this.month = month; + } + + public int getMonthInt() { + return month; + } + + public int getTotalNumDays(int year) { + Calendar cal = Calendar.getInstance(); + cal.set(year, month, 1); + return cal.getActualMaximum(Calendar.DAY_OF_MONTH); + } + + @Override + public int getNumFiles() { + int numFiles = 0; + for (DayEpoch de : days) { + numFiles += de.getNumFiles(); + } + return numFiles; + } + + public DayEpoch getDay(int dayNum) { + DayEpoch de = null; + for (DayEpoch d : days) { + if (d.dayNum == dayNum) { + de = d; + break; + } + } + return de; + } + + public void add(long fileId, int day) { + DayEpoch dayEpoch = null; + for (DayEpoch de : days) { + if (de.getDayInt() == day) { + dayEpoch = de; + break; + } + } + + if (dayEpoch == 
null) { + dayEpoch = new DayEpoch(day); + days.add(dayEpoch); + } + + dayEpoch.add(fileId); + } + + /** + * Returns the month's name in String format, e.g., September, July, + */ + String getMonthName() { + return new DateFormatSymbols().getMonths()[month]; + } + + /** + * @return the list of days in this month + */ + List getDays() { + return this.days; + } + } + + private class DayEpoch extends Epoch { + + private final List fileIds = new ArrayList<>(); + int dayNum = 0; //Day of the month this Epoch represents, 1 indexed: 28=28. + + DayEpoch(int dayOfMonth) { + this.dayNum = dayOfMonth; + } + + public int getDayInt() { + return dayNum; + } + + @Override + public int getNumFiles() { + return fileIds.size(); + } + + public void add(long fileId) { + fileIds.add(fileId); + } + + List getEvents() { + return this.fileIds; + } + } + + // The node factories used to make lists of files to send to the result viewer + // using the lazy loading (rather than background) loading option to facilitate + // loading a huge number of nodes for the given day + private class FileNodeChildFactory extends Children.Keys { + + private List fileIds; + + FileNodeChildFactory(List fileIds) { + super(true); + this.fileIds = fileIds; + } + + @Override + protected void addNotify() { + super.addNotify(); + setKeys(fileIds); + } + + @Override + protected void removeNotify() { + super.removeNotify(); + setKeys(new ArrayList()); + } + + @Override + protected Node[] createNodes(Long t) { + return new Node[]{createNodeForKey(t)}; + } + + // @Override + // protected boolean createKeys(List list) { + // list.addAll(fileIds); + // return true; + // } + //@Override + protected Node createNodeForKey(Long fileId) { + AbstractFile af = null; + try { + af = skCase.getAbstractFileById(fileId); + } catch (TskCoreException ex) { + logger.log(Level.SEVERE, "Error getting file by id and creating a node in Timeline: " + fileId, ex); + //no node will be shown for this object + return null; + } + + Node wrapped; + 
if (af.isDir()) { + wrapped = new DirectoryNode(af, false); + } else { + wrapped = new FileNode(af, false); + } + return new FilterNodeLeaf(wrapped); + } + } + + private class FileRootNode extends DisplayableItemNode { + + FileRootNode(String NAME, List fileIds) { + //super(Children.create(new FileNodeChildFactory(fileIds), true)); + super(new FileNodeChildFactory(fileIds), Lookups.singleton(fileIds)); + super.setName(NAME); + super.setDisplayName(NAME); + } + + @Override + public boolean isLeafTypeNode() { + return false; + } + + @Override + public T accept(DisplayableItemNodeVisitor v) { + return null; + } + } + + private List parseMacTime(java.io.File f) { + List years = new ArrayList<>(); + Scanner scan; + try { + scan = new Scanner(new FileInputStream(f)); + } catch (FileNotFoundException ex) { + logger.log(Level.SEVERE, "Error: could not find mactime file.", ex); + return years; + } + scan.useDelimiter(","); + scan.nextLine(); // skip the header line + + int prevYear = -1; + YearEpoch ye = null; + while (scan.hasNextLine()) { + String[] s = scan.nextLine().split(","); //1999-02-08T11:08:08Z, 78706, m..b, rrwxrwxrwx, 0, 0, 8355, /img... 
+ String[] datetime = s[0].split("T"); //{1999-02-08, 11:08:08Z} + String[] date = datetime[0].split("-"); // {1999, 02, 08} + int year = Integer.valueOf(date[0]); + int month = Integer.valueOf(date[1]) - 1; //Months are zero indexed: 1 = February, 6 = July, 11 = December + int day = Integer.valueOf(date[2]); //Days are 1 indexed + long ObjId = Long.valueOf(s[4]); + + // when the year changes, create and add a new YearEpoch object to the list + if (year != prevYear) { + ye = new YearEpoch(year); + years.add(ye); + prevYear = year; + } + + if (ye != null) { + ye.add(ObjId, month, day); + } + } + + scan.close(); + + return years; + } + + /** + * Create a body file and return its path or null if error + * + * @return absolute path string or null if error + */ + private String makeBodyFile() { + // Setup timestamp + DateFormat dateFormat = new SimpleDateFormat("MM-dd-yyyy-HH-mm-ss"); + Date date = new Date(); + String datenotime = dateFormat.format(date); + + final Case currentCase = Case.getCurrentCase(); + + // Get report path + String bodyFilePath = moduleDir.getAbsolutePath() + + java.io.File.separator + currentCase.getName() + "-" + datenotime + ".txt"; + + // Run query to get all files + final String filesAndDirs = "name != '.' 
" + + "AND name != '..'"; + List fileIds = null; + try { + fileIds = skCase.findAllFileIdsWhere(filesAndDirs); + } catch (TskCoreException ex) { + logger.log(Level.SEVERE, "Error querying image files to make a body file: " + bodyFilePath, ex); + return null; + } + + // Loop files and write info to report + FileWriter fileWriter = null; + try { + fileWriter = new FileWriter(bodyFilePath, true); + } catch (IOException ex) { + logger.log(Level.SEVERE, "Error creating output stream to write body file to: " + bodyFilePath, ex); + return null; + } + + BufferedWriter out = null; + try { + out = new BufferedWriter(fileWriter); + for (long fileId : fileIds) { + AbstractFile file = skCase.getAbstractFileById(fileId); + // try { + // MD5|name|inode|mode_as_string|ObjId|GID|size|atime|mtime|ctime|crtime + if (file.getMd5Hash() != null) { + out.write(file.getMd5Hash()); + } + out.write("|"); + String path = null; + try { + path = file.getUniquePath(); + } catch (TskCoreException e) { + logger.log(Level.SEVERE, "Failed to get the unique path of: " + file + " and writing body file.", e); + return null; + } + + out.write(path); + + out.write("|"); + out.write(Long.toString(file.getMetaAddr())); + out.write("|"); + String modeString = file.getModesAsString(); + if (modeString != null) { + out.write(modeString); + } + out.write("|"); + out.write(Long.toString(file.getId())); + out.write("|"); + out.write(Long.toString(file.getGid())); + out.write("|"); + out.write(Long.toString(file.getSize())); + out.write("|"); + out.write(Long.toString(file.getAtime())); + out.write("|"); + out.write(Long.toString(file.getMtime())); + out.write("|"); + out.write(Long.toString(file.getCtime())); + out.write("|"); + out.write(Long.toString(file.getCrtime())); + out.write("\n"); + } + } catch (TskCoreException ex) { + logger.log(Level.SEVERE, "Error querying file by id", ex); + return null; + + } catch (IOException ex) { + logger.log(Level.WARNING, "Error while trying to write data to the body 
file.", ex); + return null; + } finally { + if (out != null) { + try { + out.flush(); + out.close(); + } catch (IOException ex1) { + logger.log(Level.WARNING, "Could not flush and/or close body file.", ex1); + } + } + } + + + return bodyFilePath; + } + + private String makeMacTime(String pathToBodyFile) { + String cmdpath = ""; + String macpath = ""; + String[] mactimeArgs; + final String machome = macRoot.getAbsolutePath(); + pathToBodyFile = PlatformUtil.getOSFilePath(pathToBodyFile); + if (PlatformUtil.isWindowsOS()) { + macpath = machome + java.io.File.separator + "mactime.exe"; + cmdpath = PlatformUtil.getOSFilePath(macpath); + mactimeArgs = new String[]{"-b", pathToBodyFile, "-d", "-y"}; + } else { + cmdpath = "perl"; + macpath = machome + java.io.File.separator + "mactime.pl"; + mactimeArgs = new String[]{macpath, "-b", pathToBodyFile, "-d", "-y"}; + } + + String macfile = moduleDir.getAbsolutePath() + java.io.File.separator + mactimeFileName; + + + String output = ""; + ExecUtil execUtil = new ExecUtil(); + Writer writer = null; + try { + //JavaSystemCaller.Exec.execute("\"" + command + "\""); + writer = new FileWriter(macfile); + execUtil.execute(writer, cmdpath, mactimeArgs); + } catch (InterruptedException ie) { + logger.log(Level.WARNING, "Mactime process was interrupted by user", ie); + return null; + } catch (IOException ioe) { + logger.log(Level.SEVERE, "Could not create mactime file, encountered error ", ioe); + return null; + } finally { + if (writer != null) { + try { + writer.close(); + } catch (IOException ex) { + logger.log(Level.SEVERE, "Could not close writer after creating mactime file, encountered error ", ex); + } + } + } + + return macfile; + } + + @Override + public boolean isEnabled() { + return Case.isCaseOpen() && this.fxInited; + } + + @Override + public void performAction() { + initTimeline(); + } + + private void initTimeline() { + if (!Case.existsCurrentCase()) { + return; + } + + final Case currentCase = Case.getCurrentCase(); + 
skCase = currentCase.getSleuthkitCase(); + + try { + if (currentCase.getRootObjectsCount() == 0) { + logger.log(Level.INFO, "Error creating timeline, there are no data sources. "); + } else { + + if (IngestManager.getDefault().isIngestRunning()) { + int answer = JOptionPane.showConfirmDialog(new JFrame(), + "You are trying to generate a timeline before " + + "ingest has been completed. The timeline may be " + + "incomplete. Do you want to continue?", "Timeline", + JOptionPane.YES_NO_OPTION); + if (answer != JOptionPane.YES_OPTION) { + return; + } + } + + logger.log(Level.INFO, "Beginning generation of timeline"); + + // if the timeline window is already open, bring to front and do nothing + if (mainFrame != null && mainFrame.isVisible()) { + mainFrame.toFront(); + return; + } + + // listen for case changes (specifically images being added). + if (Case.isCaseOpen() && !listeningToAddImage) { + Case.addPropertyChangeListener(this); + listeningToAddImage = true; + } + + // create the modal progressDialog + SwingUtilities.invokeLater(new Runnable() { + @Override + public void run() { + progressDialog = new TimelineProgressDialog(WindowManager.getDefault().getMainWindow(), true); + progressDialog.setVisible(true); + } + }); + + // initialize mactimeFileName + mactimeFileName = currentCase.getName() + "-MACTIME.txt"; + + // see if data has been added to the database since the last + // time timeline ran + long objId = skCase.getLastObjectId(); + if (objId != lastObjectId && lastObjectId != -1) { + clearMactimeData(); + } + lastObjectId = objId; + + customize(); + } + } catch (TskCoreException ex) { + logger.log(Level.SEVERE, "Error when generating timeline, ", ex); + } catch (Exception ex) { + logger.log(Level.SEVERE, "Unexpected error when generating timeline, ", ex); + } + } + + @Override + public String getName() { + return "Make Timeline (Beta)"; + } + + @Override + public HelpCtx getHelpCtx() { + return HelpCtx.DEFAULT_HELP; + } + + @Override + public boolean 
asynchronous() { + return false; + } +} diff --git a/branding/core/core.jar/org/netbeans/core/startup/Bundle.properties b/branding/core/core.jar/org/netbeans/core/startup/Bundle.properties index 4744f62146..c79e98565e 100644 --- a/branding/core/core.jar/org/netbeans/core/startup/Bundle.properties +++ b/branding/core/core.jar/org/netbeans/core/startup/Bundle.properties @@ -1,5 +1,5 @@ #Updated by build script -#Wed, 25 Sep 2013 13:55:37 -0400 +#Fri, 08 Nov 2013 11:12:24 -0500 LBL_splash_window_title=Starting Autopsy SPLASH_HEIGHT=288 SPLASH_WIDTH=538 @@ -8,4 +8,4 @@ SplashRunningTextBounds=5,266,530,17 SplashRunningTextColor=0x0 SplashRunningTextFontSize=18 -currentVersion=Autopsy 3.0.7 +currentVersion=Autopsy 3.0.8 diff --git a/branding/modules/org-netbeans-core-windows.jar/org/netbeans/core/windows/view/ui/Bundle.properties b/branding/modules/org-netbeans-core-windows.jar/org/netbeans/core/windows/view/ui/Bundle.properties index 0dbd5e9a00..266f776266 100644 --- a/branding/modules/org-netbeans-core-windows.jar/org/netbeans/core/windows/view/ui/Bundle.properties +++ b/branding/modules/org-netbeans-core-windows.jar/org/netbeans/core/windows/view/ui/Bundle.properties @@ -1,5 +1,5 @@ -#Updated by build script -#Wed, 25 Sep 2013 13:55:37 -0400 - -CTL_MainWindow_Title=Autopsy 3.0.7 -CTL_MainWindow_Title_No_Project=Autopsy 3.0.7 +#Updated by build script +#Fri, 08 Nov 2013 11:12:24 -0500 + +CTL_MainWindow_Title=Autopsy 3.0.8 +CTL_MainWindow_Title_No_Project=Autopsy 3.0.8 diff --git a/build-windows.xml b/build-windows.xml index bcfea961f1..09795298a4 100644 --- a/build-windows.xml +++ b/build-windows.xml @@ -84,6 +84,14 @@ + + + + + + + + @@ -92,7 +100,7 @@ - + @@ -102,6 +110,14 @@ + + + + + + + + @@ -111,7 +127,7 @@ - + @@ -145,7 +161,7 @@ - + @@ -181,14 +197,6 @@ - - - - - - - - diff --git a/docs/doxygen/Doxyfile b/docs/doxygen/Doxyfile index b8d21a957d..3ec84adc77 100644 --- a/docs/doxygen/Doxyfile +++ b/docs/doxygen/Doxyfile @@ -663,6 +663,7 @@ WARN_LOGFILE = INPUT = 
main.dox \ workflow.dox \ + services.dox \ modContent.dox \ modDev.dox \ modIngest.dox \ diff --git a/docs/doxygen/main.dox b/docs/doxygen/main.dox index e79ac5f641..4159f893e6 100644 --- a/docs/doxygen/main.dox +++ b/docs/doxygen/main.dox @@ -1,23 +1,24 @@ -/*! \mainpage Autopsy Forensic Browser Developer's Guide and API Reference - -

-<h3>Overview</h3>

-Autopsy has been designed as a platform for open source tools besides just The Sleuth Kit. This document is for developers who want to add functionality into Autopsy. This could be in the form of enhancing the existing functionality or by making a module that plugs into it and you may distribute from your own site or push it back into the base distribution. - -If you want to write modules, then these pages are for you: -- \subpage platform_page -- \subpage mod_dev_page -- The following are based on specific types of modules: - - \subpage mod_ingest_page - - \subpage mod_report_page - - \subpage mod_content_page - - \subpage mod_result_page -- \subpage adv_dev_page - -These pages are more detailed if you want to modify Autopsy code instead of writing add-on modules. -- \subpage workflow_page -- \subpage regression_test_page - - -*/ - - +/*! \mainpage Autopsy Forensic Browser Developer's Guide and API Reference + +

Overview

+Autopsy has been designed as a platform for open source tools besides just The Sleuth Kit. This document is for developers who want to add functionality into Autopsy. This could be in the form of enhancing the existing functionality or by making a module that plugs into it, which you may distribute from your own site or push back into the base distribution. + +If you want to write modules, then these pages are for you: +- \subpage platform_page +- \subpage mod_dev_page +- \subpage services_page +- The following are based on specific types of modules: + - \subpage mod_ingest_page + - \subpage mod_report_page + - \subpage mod_content_page + - \subpage mod_result_page +- \subpage adv_dev_page + +These pages are more detailed if you want to modify Autopsy code instead of writing add-on modules. +- \subpage workflow_page +- \subpage regression_test_page + + +*/ + + diff --git a/docs/doxygen/modAdvanced.dox b/docs/doxygen/modAdvanced.dox index 0efb82551b..5a2416f1a2 100644 --- a/docs/doxygen/modAdvanced.dox +++ b/docs/doxygen/modAdvanced.dox @@ -1,44 +1,46 @@ /*! \page adv_dev_page Advanced Development Concepts -\section mod_dev_adv Advanced Concepts - -These aren't really advanced, but you don't need to know them in detail when you start your first module. You'll want to refer back to them after you get started and wonder, "how do I do X". - - -\subsection mod_dev_adv_options Option Panels - - -Some modules may have configuration settings that uses can change. We recommend that you use the infrastructure provided by Autopsy and NetBeans to do this so that all module condiguration is done in a single place. - -To add a panel to the options menu, right click the module and choose New > Other. Under the Module Development category, select Options Panel and press Next. - -Select Create Primary Panel, name the panel (preferably with the module's name), select an icon, and add keywords, then click Next and Finish.
Note that NetBeans will automatically copy the selected icon to the module's directory if not already there. - -NetBeans will generate two Java files for you, the panel and the controller. For now, we only need to focus on the panel. - -First, use NetBeans' GUI builder to design the panel. Be sure to include all options, settings, preferences, etc for the module, as this is what the user will see. The recommended size of an options panel is about 675 x 500. - -Second, in the source code of the panel, there are two important methods: \c load() and \c store(). When the options panel is opened via Tools > Options in Autopsy, the \c load() method will be called. Conversely, when the user presses OK after editing the options, the \c store() method will be called. - -If one wishes to make any additional panels within the original options panel, or panels which the original opens, Autopsy provides the org.sleuthkit.autopsy.corecomponents.OptionsPanel interface to help. This interface requires the \c store() and \c load() functions also be provided in the separate panels, allowing for easier child storing and loading. - -Any storing or loading of settings or properties should be done in the \c store() and \c load() methods. The next section, \ref mod_dev_adv_properties, has more details on doing this. - - -\subsection mod_dev_adv_properties Saving Settings and Properties - -It is recommended to have the module settings persistent, so that when a change is made and Autopsy is re-opened -the user made changes remain effective and not reset back to defaults. -Use org.sleuthkit.autopsy.coreutils.ModuleSettings class for saving and reading back settings for your module. - - -\subsection mod_dev_adv_events Registering for Events - -Autopsy will generate events as the application runs and modules may want to listen for those events so that they can change their state. 
There is not an exhaustive list of events, but here are some common ones to listen for: - -- Case change events occur when a case is opened, closed, or changed. The org.sleuthkit.autopsy.casemodule.Case.addPropertyChangeListener() method can be used for this. -- IngestManager events occur when new results are available. The org.sleuthkit.autopsy.ingest.IngestManager.addPropertyChangeListener() method can be used for this. - +\section mod_dev_adv Advanced Concepts + +These aren't really advanced, but you don't need to know them in detail when you start your first module. You'll want to refer back to them after you get started and wonder, "how do I do X". + + +\subsection mod_dev_adv_options Option Panels + + +Some modules may have configuration settings that users can change. We recommend that you use the infrastructure provided by Autopsy and NetBeans to do this so that all module configuration is done in a single place. + +Note: This option panel applies to all module types. Ingest modules have a second type of option panel that can be accessed when a data source is added to a case. Refer to \ref ingestmodule_making_configuration for details on how to use those option panels. + +To add a panel to the options menu, right click the module and choose New > Other. Under the Module Development category, select Options Panel and press Next. + +Select Create Primary Panel, name the panel (preferably with the module's name), select an icon, and add keywords, then click Next and Finish. Note that NetBeans will automatically copy the selected icon to the module's directory if not already there. + +NetBeans will generate two Java files for you, the panel and the controller. For now, we only need to focus on the panel. + +First, use NetBeans' GUI builder to design the panel. Be sure to include all options, settings, preferences, etc. for the module, as this is what the user will see. The recommended size of an options panel is about 675 x 500.
+ +Second, in the source code of the panel, there are two important methods: \c load() and \c store(). When the options panel is opened via Tools > Options in Autopsy, the \c load() method will be called. Conversely, when the user presses OK after editing the options, the \c store() method will be called. + +If one wishes to make any additional panels within the original options panel, or panels which the original opens, Autopsy provides the org.sleuthkit.autopsy.corecomponents.OptionsPanel interface to help. This interface requires that the \c store() and \c load() functions also be provided in the separate panels, allowing for easier child storing and loading. + +Any storing or loading of settings or properties should be done in the \c store() and \c load() methods. The next section, \ref mod_dev_adv_properties, has more details on doing this. + + +\subsection mod_dev_adv_properties Saving Settings and Properties + +It is recommended to make the module settings persistent, so that when a change is made and Autopsy is re-opened +the user-made changes remain effective and are not reset back to defaults. +Use the org.sleuthkit.autopsy.coreutils.ModuleSettings class for saving and reading back settings for your module. + + +\subsection mod_dev_adv_events Registering for Events + +Autopsy will generate events as the application runs and modules may want to listen for those events so that they can change their state. There is not an exhaustive list of events, but here are some common ones to listen for: + +- Case change events occur when a case is opened, closed, or changed. The org.sleuthkit.autopsy.casemodule.Case.addPropertyChangeListener() method can be used for this. +- IngestManager events occur when new results are available. The org.sleuthkit.autopsy.ingest.IngestManager.addPropertyChangeListener() method can be used for this.
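The event registration described above can be sketched with the standard java.beans listener types that methods like Case.addPropertyChangeListener() are built on. In this sketch, the CaseLike class is a hypothetical stand-in for Autopsy's Case (used only so the example runs standalone), and an anonymous class is used to stay compatible with Java 7:

```java
import java.beans.PropertyChangeEvent;
import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;
import java.util.ArrayList;
import java.util.List;

public class EventDemo {

    /** Hypothetical stand-in for Case/IngestManager, which expose the same listener API. */
    static class CaseLike {
        private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);

        void addPropertyChangeListener(PropertyChangeListener l) {
            pcs.addPropertyChangeListener(l);
        }

        void fire(String name, Object oldValue, Object newValue) {
            pcs.firePropertyChange(name, oldValue, newValue);
        }
    }

    static final List<String> received = new ArrayList<String>();

    public static void main(String[] args) {
        CaseLike currentCase = new CaseLike();

        // A module registers once (typically in init()) and reacts to events as they arrive.
        currentCase.addPropertyChangeListener(new PropertyChangeListener() {
            @Override
            public void propertyChange(PropertyChangeEvent evt) {
                received.add(evt.getPropertyName());
            }
        });

        currentCase.fire("CASE_OPENED", null, "example-case");
        System.out.println(received); // prints [CASE_OPENED]
    }
}
```

The same listener object can be reused for Case and IngestManager events; switch on the event's property name inside propertyChange().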
+ */ diff --git a/docs/doxygen/modDev.dox b/docs/doxygen/modDev.dox index 32d95deb72..840ff123e3 100644 --- a/docs/doxygen/modDev.dox +++ b/docs/doxygen/modDev.dox @@ -1,93 +1,93 @@ -/*! \page mod_dev_page Development Basics - - - -This page describes the basic concepts and setup that are needed regardless of the module type that you are building. - -\section mod_dev_setup Basic Setup - -\subsection mod_dev_setup_nb NetBeans and Java - -Autopsy is built on top of the NetBeans Rich Client Platform, which makes it easy to make plug-in infrastructures. To do any development, you really need to download NetBeans first. You can in theory develop modules by command line only, but this document assumes that you are using the IDE. Download and install the latest version of the IDE from http://www.netbeans.org. - -Autopsy currently requires Java 1.7. Ensure that it is installed. - -\subsection mod_dev_setup_platform Obtain the Autopsy Platform - -Before we can make a module, we must configure NetBeans to know about Autopsy as a platform. This will allow you to access all of the classes and services that Autopsy provides. There are two ways of configuring the NetBeans IDE to know about Autopsy: - -- Download an official release of Autopsy and build against it. -- Download Autopsy source code, build it, and make a platform to build against. - +/*! \page mod_dev_page Development Setup + + + +This page describes the basic concepts and setup that are needed regardless of the module type that you are building. + +\section mod_dev_setup Basic Setup + +\subsection mod_dev_setup_nb NetBeans and Java + +Autopsy is built on top of the NetBeans Rich Client Platform, which makes it easy to make plug-in infrastructures. To do any development, you really need to download NetBeans first. You can in theory develop modules by command line only, but this document assumes that you are using the IDE. Download and install the latest version of the IDE from http://www.netbeans.org. 
+ +Autopsy currently requires Java 1.7. Ensure that it is installed. + +\subsection mod_dev_setup_platform Obtain the Autopsy Platform + +Before we can make a module, we must configure NetBeans to know about Autopsy as a platform. This will allow you to access all of the classes and services that Autopsy provides. There are two ways of configuring the NetBeans IDE to know about Autopsy: + +- Download an official release of Autopsy and build against it. +- Download Autopsy source code, build it, and make a platform to build against. + \subsubsection mod_dev_setup_platform_rel Using a Released Version - -The easiest method for obtaining the platform is to install Autopsy on your computer. It will have everything that you need. If you installed it in "C:\Program Files\Autopsy", then the platform is in "C:\Program Files\Autopsy\platform". You can now also download just the ZIP file of the Autopsy release instead of the MSI installer. This maybe more convenient for development situations. - -\subsubsection mod_dev_setup_platform_src Building a Platform from Code - -If you want to build against the bleeding edge code and updates that have occurred since the last release, then you must download the latest source code and build it. This involves getting a full development environment setup. Refer to the wiki page at http://wiki.sleuthkit.org/index.php?title=Autopsy_Developer%27s_Guide for details on getting the source code and a development environment setup. - -To use the latest Autopsy source code as your development environment, first follow BUILDING.TXT in the root source repository to properly build and setup Autopsy in NetBeans. - -Once Autopsy has been successfully built, right click on the Autopsy project in NetBeans and select Package as > ZIP Distribution. Once the ZIP file is created, extract its contents to a directory. This directory is the platform that you will build against. Note that you will building the module against this built platform. 
If you need to make changes to Autopsy infrastructure for your module, then you will need to then make a new ZIP file and configure your module to use it each time. - - -\section mod_dev_module Creating a Basic NetBeans Module - -The Autopsy modules are encapsulated inside of NetBeans modules. A NetBeans module will be packaged as a single ".nbm" file. A single NetBeans module can contain many Autopsy modules. The NetBeans module is what the user will install and provides things like auto-update. - -\subsection mod_dev_mod_nb Creating a NetBeans Module - -If this is your first module, then you will need to make a NetBeans module. If you have already made an Autopsy module and are now working on a second one, you can consider adding it to your pevious NetBeans module. - + +The easiest method for obtaining the platform is to install Autopsy on your computer. It will have everything that you need. If you installed it in "C:\Program Files\Autopsy", then the platform is in "C:\Program Files\Autopsy\platform". You can now also download just the ZIP file of the Autopsy release instead of the MSI installer. This may be more convenient for development situations. + +\subsubsection mod_dev_setup_platform_src Building a Platform from Code + +If you want to build against the bleeding edge code and updates that have occurred since the last release, then you must download the latest source code and build it. This involves getting a full development environment setup. Refer to the wiki page at http://wiki.sleuthkit.org/index.php?title=Autopsy_Developer%27s_Guide for details on getting the source code and a development environment setup. + +To use the latest Autopsy source code as your development environment, first follow BUILDING.TXT in the root source repository to properly build and set up Autopsy in NetBeans. + +Once Autopsy has been successfully built, right click on the Autopsy project in NetBeans and select Package as > ZIP Distribution.
Once the ZIP file is created, extract its contents to a directory. This directory is the platform that you will build against. Note that you will be building the module against this built platform. If you need to make changes to Autopsy infrastructure for your module, then you will need to make a new ZIP file and configure your module to use it each time. + + +\section mod_dev_module Creating a Basic NetBeans Module + +The Autopsy modules are encapsulated inside of NetBeans modules. A NetBeans module will be packaged as a single ".nbm" file. A single NetBeans module can contain many Autopsy modules. The NetBeans module is what the user will install and provides things like auto-update. + +\subsection mod_dev_mod_nb Creating a NetBeans Module + +If this is your first module, then you will need to make a NetBeans module. If you have already made an Autopsy module and are now working on a second one, you can consider adding it to your previous NetBeans module. + To make a NetBeans module: -- Open the NetBeans IDE and go to File -> New Project. -- From the list of categories, choose "NetBeans Modules" and then "Module" from the list of "Projects". Click Next.
+- In the next panel of the wizard, give the module a name and directory. Select Standalone Module (the default is typically "Add to Suite") so that you build the module as an external module against Autopsy. You will need to tell NetBeans about the Autopsy platform, so choose the "Manage" button. Choose the "Add Platform" button and browse to the location of the platform discussed in the previous sections (as a reminder this will either be the location that you installed Autopsy into or where you opened up the ZIP file you created from source). Click Next. +- Finally, enter the code base name. Press Finish. \subsubsection mod_dev_mod_nb_config Configuring the NetBeans Module - -After the module is created, you will need to do some further configuration. -- Right click on the newly created module and choose "Properties". -- You will need to configure the module to be dependent on modules from within the Autopsy platform. Go to the "Libraries" area and choose "Add" in the "Module Dependencies" section. Choose the "Autopsy-core" library. You now have access to the Autopsy services. -- If you later determine that you need to pull in external JAR files, then you will use the "Wrapped Jar" section to add them in. -- Note, you will also need to come back to this section if you update the platform. You may need to add a new dependency for the version of the Autopsy-core that comes with the updated platform. -- Autopsy requires that all modules restart Autopsy after they are installed. Configure your module this way under Build -> Packaging. Check the box that says Needs Restart on Install. -You now have a NetBeans module that is using Autopsy as its build platform. That means you will have access to all of the services and utilities that Autopsy provides (such as \ref platform_details). - - -\subsubsection mod_dev_mod_config_other Optional Settings -There are several optional things in the Properties section. You can add a description and specify the version. 
You can do all of this later though and it does not need to be done before you start development. - -A link about the NetBeans versioning scheme can be found here http://wiki.netbeans.org/VersioningPolicy. +After the module is created, you will need to do some further configuration. +- Right click on the newly created module and choose "Properties". +- You will need to configure the module to be dependent on modules from within the Autopsy platform. Go to the "Libraries" area and choose "Add" in the "Module Dependencies" section. Choose the "Autopsy-core" library. You now have access to the Autopsy services. +- If you later determine that you need to pull in external JAR files, then you will use the "Wrapped Jar" section to add them in. +- Note, you will also need to come back to this section if you update the platform. You may need to add a new dependency for the version of the Autopsy-core that comes with the updated platform. +- Autopsy requires that all modules restart Autopsy after they are installed. Configure your module this way under Build -> Packaging. Check the box that says Needs Restart on Install. + +You now have a NetBeans module that is using Autopsy as its build platform. That means you will have access to all of the services and utilities that Autopsy provides (such as \ref platform_details). + + +\subsubsection mod_dev_mod_config_other Optional Settings +There are several optional things in the Properties section. You can add a description and specify the version. You can do all of this later though and it does not need to be done before you start development. + +A link about the NetBeans versioning scheme can be found here http://wiki.netbeans.org/VersioningPolicy. Autopsy follows this scheme and a link to the details can be found at http://wiki.sleuthkit.org/index.php?title=Autopsy_3_Module_Versions. - -\subsection mod_dev_mod_other Other Links - -For general NetBeans module information, refer to this guide from NetBeans.org. 
- - -\section mod_dev_aut Creating Autopsy Modules - -You can now add Autopsy modules into the NetBeans container module. There are other pages that focus on that and are listed on the main page. The rest of this document contains info that you will eventually want to come back to though. -As you will read in the later sections about the different module types, each Autopsy Module is a java class that extends an interface (the interface depends on the type of module). - - -\subsection mod_dev_aut_run1 Running Your Module During Development - -When you are developing your Autopsy module, you can simply choose "Run" on the module and it will launch the Autopsy platform with the module enabled in it. This is also how you can debug the module. - -\subsection mod_dev_aut_deploy Deploying Your Module - -When you are ready to share your module, create an NBM file by right clicking on the module and selecting "Create NBM". - -\subsection mod_dev_aut_install Installing Your Module - -To install the module on a non-development environment, launch Autopsy and choose Plugins under the Tools menu. Open the Downloaded tab and click Add Plugins. Navigate to the NBM file and open it. Next, click Install and follow the wizard. - - -*/ + +\subsection mod_dev_mod_other Other Links + +For general NetBeans module information, refer to this guide from NetBeans.org. + + +\section mod_dev_aut Creating Autopsy Modules + +You can now add Autopsy modules into the NetBeans container module. There are other pages that focus on that and are listed on the main page. The rest of this document contains info that you will eventually want to come back to though. +As you will read in the later sections about the different module types, each Autopsy Module is a java class that extends an interface (the interface depends on the type of module). 
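The idea above, that each Autopsy module is a Java class implementing a module-type interface, can be sketched with a simplified, hypothetical interface (SimpleModule here is illustrative only, not the real Autopsy API):

```java
/**
 * Illustrative skeleton: "SimpleModule" is a hypothetical, trimmed-down
 * interface, not a real org.sleuthkit.autopsy module type.
 */
public class ModuleSkeleton {

    interface SimpleModule {
        String getName();
        String getDescription();
    }

    // An Autopsy module is, at its core, a class implementing a module interface.
    static class MyModule implements SimpleModule {
        @Override
        public String getName() {
            return "My Module";
        }

        @Override
        public String getDescription() {
            return "A demo module skeleton.";
        }
    }

    public static void main(String[] args) {
        SimpleModule module = new MyModule();
        System.out.println(module.getName() + ": " + module.getDescription());
    }
}
```

The real interfaces (for ingest, report, and content viewer modules) add type-specific methods, which the later module-type pages describe.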
+ + +\subsection mod_dev_aut_run1 Running Your Module During Development + +When you are developing your Autopsy module, you can simply choose "Run" on the module and it will launch the Autopsy platform with the module enabled in it. This is also how you can debug the module. + +\subsection mod_dev_aut_deploy Deploying Your Module + +When you are ready to share your module, create an NBM file by right clicking on the module and selecting "Create NBM". + +\subsection mod_dev_aut_install Installing Your Module + +To install the module on a non-development environment, launch Autopsy and choose Plugins under the Tools menu. Open the Downloaded tab and click Add Plugins. Navigate to the NBM file and open it. Next, click Install and follow the wizard. + + +*/ diff --git a/docs/doxygen/modIngest.dox b/docs/doxygen/modIngest.dox index 272d9218fc..2512a96c37 100644 --- a/docs/doxygen/modIngest.dox +++ b/docs/doxygen/modIngest.dox @@ -1,249 +1,265 @@ -/*! \page mod_ingest_page Developing Ingest Modules - - -\section ingestmodule_modules Ingest Module Basics - -This section tells you how to make an Ingest Module. Ingest modules -analyze data from a data source (a disk image or set of logical -files). They typically focus on a specific type of data analysis. -The modules are loaded each time that Autopsy starts. The user can -choose to enable each module when they add an image to the case. -It assumes you have already setup your development environment as -described in \ref mod_dev_page. - -First, you need to choose the type of Ingest Module. - -- Data Source-level modules are passed in a reference to a top-level data source, such as an Image or folder of logical files. -These modules may query the database for a small set of specific files. For example, a Windows registry module that runs on the hive files. It is interested in only a small subset of the hard drive files. - -- File-level modules are passed in a reference to each file. 
-The Ingest Manager chooses which files to pass and when. -These modules are intended to analyze most of the files on the system -For example, a hash calculation module that reads in the content of every file. - - - -Refer to org.sleuthkit.autopsy.ingest.example for sample source code of dummy modules. - -\section ingest_common Commonalities - -There are several things about these module types that are common and we'll outline those here. For both modules, you will extend an interface and implement some methods. - -Refer to the documentation for each method for its use. -- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.init() is invoked when an ingest session starts. -- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.complete() is invoked when an ingest session completes. -- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.stop() is invoked on a module when an ingest session is interrupted by the user or system. -- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getName() returns the name of the module. -- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getDescription() returns a short description of the module. -- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getVersion() returns the version of the module. - - -The process() method is invoked to analyze the data. This is where -the analysis is done. The specific method depends on the module -type; it is passed either a data source or a file to process. We'll -cover this in later sections. This method will post results to the -blackboard and with inbox messages to the user. - - -\section ingest_datasrc Data Source-level Modules - -To make a data source-level module, make a new Java class either manually or using the NetBeans wizards. Edit the class to extend "org.sleuthkit.autopsy.ingest.IngestModuleDataSource". NetBeans will likely complain that you have not implemented the necessary methods and you can use its "hints" to automatically generate stubs for them. 
Use the documentation for the org.sleuthkit.autopsy.ingest.IngestModuleDataSource class for details on what each needs to do. -You can also refer to org.sleuthkit.autopsy.examples.SampleDataSourceIngestModule as an example module. - -Example snippet of an ingest-level module process() method: - -\code -@Override -public void process(Content dataSource, IngestDataSourceWorkerController controller) { - - //we have some number workunits / sub-tasks to execute - //in this case, we know the number of total tasks in advance - final int totalTasks = 12; - - //initialize the overall image ingest progress - controller.switchToDeterminate(); - controller.progress(totalTasks); - - for(int subTask = 0; subTask < totalTasks; ++subTask) { - //add cancellation support - if (controller.isCancelled() ) { - break; // break out early to let the thread terminate - } - - //do the work - try { - //sub-task may add blackboard artifacts and create an inbox message - performSubTask(subTask); - } catch (Exception ex) { - logger.log(Level.WARNING, "Exception occurred in subtask " + subTask, ex); - } - - //update progress - controller.progress(subTask + 1); - } -} -\endcode - - -\section ingest_file File-level Modules - -To make a File-level module, make a new Java class either manually or using the NetBeans wizards. Edit the class to extend "org.sleuthkit.autopsy.ingest.IngestModuleAbstractFile". NetBeans will likely complain that you have not implemented the necessary methods and you can use its "hints" to automatically generate stubs for them. Use the method documentation in the org.sleuthkit.autopsy.ingest.IngestModuleAbstractFile class to fill in the details. -You can also refer to org.sleuthkit.autopsy.examples.SampleFileIngestModule as an example module. - -Unlike Data Source-level modules, file-level modules are singletons. Only a single instance is created for all files.
-The same file-level module instance will be used for files in different images and even different cases if new cases are opened. - -Every file-level module should support multiple init() -> process() -> complete(), and init() -> process() -> stop() invocations. It should also support init() -> complete() sequences. A new case could be open for each call of init(). - -Currently (and this is likely to change in the future), File-level ingest modules are Singletons (meaning that only a single instance is created for the runtime of Autopsy). -You will need to implement a public static getDefault() method that returns a static instance of the module. Note that if you skip this step, you will not see an error until Autopsy tries to load your module and the log will say that it does not have a getDefault method. - -The implementation of this method is very standard, example: - -\code -public static synchronized MyIngestModule getDefault() { - - //defaultInstance is a private static class variable - if (defaultInstance == null) { - defaultInstance = new MyIngestModule(); - } - return defaultInstance; -} -\endcode - - -You should also make the constructor private to ensure the singleton status. - -As a result of the singleton design, init() will be called multiple times and even for different cases. Ensure that you update local member variables accordingly each time init() is called. Again, this design will likely change, but it is what it is for now. - - -\section ingestmodule_registration Module Registration - -Modules are automatically discovered if they implement the proper interface. -Currently, a restart of Autopsy is required after a module is installed before it is discovered. - -By default, modules that do not come with a standard Autopsy installation will run after the standard modules. No order -is implied. This design will likely change in the future, but currently manual configuration is needed to enforce order. 
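The getDefault() pattern from the snippet above can be shown as a complete, self-contained class (the module body is trimmed to just the singleton mechanics):

```java
public class MyIngestModule {

    private static MyIngestModule defaultInstance;

    // A private constructor enforces the singleton status, as recommended above.
    private MyIngestModule() {
    }

    // Autopsy looks up a getDefault() method by name when loading file-level modules.
    public static synchronized MyIngestModule getDefault() {
        if (defaultInstance == null) {
            defaultInstance = new MyIngestModule();
        }
        return defaultInstance;
    }

    public static void main(String[] args) {
        // Every call returns the same instance.
        System.out.println(MyIngestModule.getDefault() == MyIngestModule.getDefault()); // prints true
    }
}
```

The synchronized keyword guards against two ingest threads racing to create the instance on first use.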
- - -There is an XML pipeline configuration that contains the standard modules and specifies the order that they are run in. -If you need to specify the order of modules, then they need to be manually added to this file in the correct order. -This file is the same format as The Sleuth Kit Framework configuration file. -Refer to http://sleuthkit.org/sleuthkit/docs/framework-docs/pipeline_config_page.html which is the official documentation -for the pipeline configuration schema. - -Autopsy will provide tools for reconfiguring the ingest pipeline in the near future, -and users/developers will be able to reload the current view of discovered modules, -reorder modules in the pipeline, and set their arguments using the GUI. - - -\section ingestmodule_services Ingest Services - -Class org.sleuthkit.autopsy.ingest.IngestServices provides services specifically for the ingest modules -and a module developer should use these utilities to send messages, get the current case, etc. Refer to its documentation for method details. - -Remember, update references to IngestServices and Cases with each call to init() inside of the module. - -Module developers are encouraged to use Autopsy's org.sleuthkit.autopsy.coreutils.Logger -infrastructure to log errors to the Autopsy log. -The logger can also be accessed using the org.sleuthkit.autopsy.ingest.IngestServices class. - -Certain modules may need a persistent store (other than for storing results) for storing and reading -module configurations or state. -The ModuleSettings API can also be used via the org.sleuthkit.autopsy.ingest.IngestServices class. - - -\section ingestmodule_making_results Posting Results - -Ingest modules run in the background. There are three ways to send messages and save results: -- Blackboard for long-term storage of analysis results and display in the results tree. -- Ingest Inbox to notify the user of high-value analysis results that were also posted to blackboard. -- Error messages.
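ModuleSettings is part of the Autopsy platform, so the sketch below uses java.util.Properties as a stand-in to show the persistence idea it provides: values written in one session are read back in the next instead of resetting to defaults (file name and keys here are hypothetical):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Properties;

public class SettingsDemo {

    /** Persist one key/value pair to a .properties file (stand-in for ModuleSettings). */
    static void setSetting(File file, String key, String value) throws IOException {
        Properties props = getProperties(file);
        props.setProperty(key, value);
        FileOutputStream out = new FileOutputStream(file);
        try {
            props.store(out, "module settings");
        } finally {
            out.close();
        }
    }

    /** Read back all settings; a missing file just means nothing was saved yet. */
    static Properties getProperties(File file) throws IOException {
        Properties props = new Properties();
        if (file.exists()) {
            FileInputStream in = new FileInputStream(file);
            try {
                props.load(in);
            } finally {
                in.close();
            }
        }
        return props;
    }

    public static void main(String[] args) throws IOException {
        File file = File.createTempFile("MyModule", ".properties");
        setSetting(file, "enableHashLookup", "true");
        // A later session reads the saved value instead of falling back to a default.
        System.out.println(getProperties(file).getProperty("enableHashLookup")); // prints true
    }
}
```

In a real module you would call the ModuleSettings API instead, so that all module settings end up in Autopsy's standard configuration location.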
- -\subsection ingestmodule_making_results_bb Posting Results to Blackboard -The blackboard is used to store results so that they are displayed in the results tree. See \ref platform_blackboard for details on posting results to it. - -When modules add data to the blackboard, -modules should notify listeners of the new data by -invoking the IngestServices.fireModuleDataEvent() method. -Do so as soon as you have added an artifact to the blackboard. -This allows other modules (and the main UI) to know when to query the blackboard for the latest data. -However, if you are writing a larger number of blackboard artifacts in a loop, it is better to invoke -IngestServices.fireModuleDataEvent() only once after the bulk write, so as not to flood the system with events. - -\subsection ingestmodule_making_results_inbox Posting Results to Message Inbox - -Modules should post messages to the inbox when interesting data is found that has also been posted to the blackboard. -The idea behind these messages is that they are presented in chronological order so that users can see what was -found while they were focusing on something else. - - -These messages should only be sent if the result has a low false positive rate and will likely be relevant. -For example, the hash lookup module will send messages if known bad (notable) files are found, -but not if known good (NSRL) files are found. You can provide options to the users on when to make messages. - - -A single message includes the module name, message subject, message details, -a unique message id (in the context of the originating module), and a uniqueness attribute. -The uniqueness attribute is used to group similar messages together -and to determine the overall importance priority of the message -(if the same message is seen repeatedly, it is considered lower priority). - -For example, for a keyword search module, the uniqueness attribute would be the keyword that was hit.
- -Messages are created using the org.sleuthkit.autopsy.ingest.IngestMessage class and posted -using methods in \ref ingestmodule_services. - - -\section ingestmodule_making_configuration Module Configuration - -Ingest modules may require user configuration. In \ref mod_dev_adv_options, you learned about Autopsy-wide settings. There are some -settings that are specific to ingest modules as well. - -The framework -supports two levels of configuration: simple and advanced. Simple settings enable the user to enable and disable basic things at run-time (using check boxes and such). -Advanced settings require more in-depth configuration with more powerful interface. - -As an example, the advanced configuration for the keyword search module allows you to add and create keyword lists, choose encodings, etc. The simple interface allows -you to enable and disable lists. - -Module configuration is module-specific: every module maintains its own configuration state and is responsible for implementing the graphical interface. -If a module needs simple or advanced configuration, it needs to implement methods in its interface. -The org.sleuthkit.autopsy.ingest.IngestModuleAbstract.hasSimpleConfiguration(), -org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getSimpleConfiguration(), and org.sleuthkit.autopsy.ingest.IngestModuleAbstract.saveSimpleConfiguration() -methods should be used for simple configuration. This panel will be shown when the user chooses which ingest modules to enable. - -The advanced configuration is implemented with the -org.sleuthkit.autopsy.ingest.IngestModuleAbstract.hasAdvancedConfiguration(), -org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getAdvancedConfiguration(), and -org.sleuthkit.autopsy.ingest.IngestModuleAbstract.saveAdvancedConfiguration() -methods. This panel can be accessed from the "Advanced" button when the user chooses which ingest modules to enable. 
-It is recommended that the advanced panel be the same panel that is used in the Options area (see \ref mod_dev_adv_options).
-
-Refer to \ref mod_dev_adv_properties for details on saving properties from these panels.
-
-
-
-
-*/
+/*! \page mod_ingest_page Developing Ingest Modules
+
+
+\section ingestmodule_modules Ingest Module Basics
+
+This section tells you how to make an Ingest Module. Ingest modules
+analyze data from a data source (a disk image or set of logical
+files). They typically focus on a specific type of data analysis.
+The modules are loaded each time that Autopsy starts. The user can
+choose to enable each module when they add an image to the case.
+This section assumes you have already set up your development environment as
+described in \ref mod_dev_page.
+
+First, you need to choose the type of Ingest Module.
+
+- Data Source-level modules are passed in a reference to a top-level data source, such as an Image or folder of logical files.
+These modules may query the database for a small set of specific files. For example, a Windows registry module runs on only the hive files and is interested in just a small subset of the files on the drive.
+
+- File-level modules are passed in a reference to each file.
+The Ingest Manager chooses which files to pass and when.
+These modules are intended to analyze most of the files on the system.
+For example, a hash calculation module reads in the content of every file.
+
+
+
+Refer to org.sleuthkit.autopsy.ingest.example for sample source code of dummy modules.
+
+\section ingest_common Commonalities
+
+The two module types have several things in common, which we outline here. For both, you will extend a base class and implement some methods.
+
+Refer to the documentation for each method for its use.
+- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.init() is invoked when an ingest session starts.
+- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.complete() is invoked when an ingest session completes.
+- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.stop() is invoked on a module when an ingest session is interrupted by the user or system.
+- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getName() returns the name of the module.
+- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getDescription() returns a short description of the module.
+- org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getVersion() returns the version of the module.
+
+
+The process() method is where the analysis is done. The specific
+method signature depends on the module type; it is passed either a
+data source or a file to process. We'll cover this in later sections.
+This method posts results to the blackboard and sends inbox messages
+to the user.
+
+
+\section ingest_datasrc Data Source-level Modules
+
+To make a data source-level module, make a new Java class either manually or using the NetBeans wizards. Edit the class to extend "org.sleuthkit.autopsy.ingest.IngestModuleDataSource". NetBeans will likely complain that you have not implemented the necessary methods and you can use its "hints" to automatically generate stubs for them. Use the documentation for the org.sleuthkit.autopsy.ingest.IngestModuleDataSource class for details on what each needs to do.
+You can also refer to org.sleuthkit.autopsy.examples.SampleDataSourceIngestModule as an example module.
+
+
+Data source-level ingest modules must find the files that they want to analyze. The best way to do that is using one of the findFiles() methods in org.sleuthkit.autopsy.casemodule.services.FileManager. See \ref mod_dev_other_services for more details.
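To make the lookup pattern concrete, here is a self-contained sketch in which plain strings stand in for AbstractFile objects. This is not the real FileManager implementation: the real findFiles() methods run a case-database query, and the SQL LIKE "%" wildcard behavior modeled here is an assumption to check against the FileManager documentation.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch only: plain strings stand in for AbstractFile objects, and this
 * class stands in for org.sleuthkit.autopsy.casemodule.services.FileManager.
 * The case-insensitive SQL LIKE "%" matching is an assumption for
 * illustration, not the actual implementation.
 */
public class FindFilesSketch {

    // Emulate a case-insensitive SQL LIKE match: "%" matches any run of characters.
    static boolean likeMatch(String name, String pattern) {
        String regex = pattern.toLowerCase().replace("%", ".*");
        return name.toLowerCase().matches(regex);
    }

    // Analogous to a findFiles(dataSource, fileNamePattern) call: return matching names.
    static List<String> findFiles(List<String> allNames, String pattern) {
        List<String> hits = new ArrayList<>();
        for (String name : allNames) {
            if (likeMatch(name, pattern)) {
                hits.add(name);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> names = List.of("SYSTEM", "SOFTWARE", "ntuser.dat", "readme.txt");
        // A registry module would ask only for the hive files it cares about:
        System.out.println(findFiles(names, "ntuser.dat")); // [ntuser.dat]
        System.out.println(findFiles(names, "s%"));         // [SYSTEM, SOFTWARE]
    }
}
```

In a real module you would obtain the FileManager from the current case's services and call one of its findFiles() overloads, receiving AbstractFile objects back.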
+
+Example snippet of a data source-level module process() method:
+
+\code
+@Override
+public void process(Content dataSource, IngestDataSourceWorkerController controller) {
+
+    //we have some number of work units / sub-tasks to execute
+    //in this case, we know the total number of tasks in advance
+    final int totalTasks = 12;
+
+    //initialize the overall image ingest progress
+    controller.switchToDeterminate();
+    controller.progress(totalTasks);
+
+    for (int subTask = 0; subTask < totalTasks; ++subTask) {
+        //add cancellation support
+        if (controller.isCancelled()) {
+            break; // break out early to let the thread terminate
+        }
+
+        //do the work
+        try {
+            //sub-task may add blackboard artifacts and create an inbox message
+            performSubTask(subTask);
+        } catch (Exception ex) {
+            logger.log(Level.WARNING, "Exception occurred in subtask " + subTask, ex);
+        }
+
+        //update progress
+        controller.progress(subTask + 1);
+    }
+}
+\endcode
+
+
+\section ingest_file File-level Modules
+
+To make a File-level module, make a new Java class either manually or using the NetBeans wizards. Edit the class to extend "org.sleuthkit.autopsy.ingest.IngestModuleAbstractFile". NetBeans will likely complain that you have not implemented the necessary methods and you can use its "hints" to automatically generate stubs for them. Use the method documentation in the org.sleuthkit.autopsy.ingest.IngestModuleAbstractFile class to fill in the details.
+You can also refer to org.sleuthkit.autopsy.examples.SampleFileIngestModule as an example module.
+
+Unlike Data Source-level modules, file-level modules are singletons. Only a single instance is created for all files.
+The same file-level module instance will be used for files in different images and even different cases if new cases are opened.
+
+Every file-level module should support multiple init() -> process() -> complete(), and init() -> process() -> stop() invocations. It should also support init() -> complete() sequences.
A new case could be opened for each call of init().
+
+Currently (and this is likely to change in the future), file-level ingest modules are singletons (meaning that only a single instance is created for the runtime of Autopsy).
+You will need to implement a public static getDefault() method that returns a static instance of the module. Note that if you skip this step, you will not see an error until Autopsy tries to load your module and the log will say that it does not have a getDefault method.
+
+The implementation of this method is standard; for example:
+
+\code
+public static synchronized MyIngestModule getDefault() {
+
+    //defaultInstance is a private static class variable
+    if (defaultInstance == null) {
+        defaultInstance = new MyIngestModule();
+    }
+    return defaultInstance;
+}
+\endcode
+
+
+You should also make the constructor private to ensure the singleton status.
+
+As a result of the singleton design, init() will be called multiple times and even for different cases. Ensure that you update local member variables accordingly each time init() is called. Again, this design will likely change, but it is what it is for now.
+
+
+\section ingestmodule_registration Module Registration
+
+Modules are automatically discovered if they implement the proper interface.
+Currently, a restart of Autopsy is required after a module is installed before it is discovered.
+
+By default, modules that do not come with a standard Autopsy installation will run after the standard modules. No order
+is implied. This design will likely change in the future, but currently manual configuration is needed to enforce order.
+
+
+There is an XML pipeline configuration file that contains the standard modules and specifies the order that they are run in.
+If you need to specify the order of modules, then they need to be manually added to this file in the correct order.
+This file is the same format as The Sleuth Kit Framework configuration file.
+Refer to http://sleuthkit.org/sleuthkit/docs/framework-docs/pipeline_config_page.html for the official documentation
+of the pipeline configuration schema.
+
+Autopsy will provide tools for reconfiguring the ingest pipeline in the near future,
+and users/developers will be able to reload the current view of discovered modules,
+reorder modules in the pipeline, and set their arguments using the GUI.
+
+
+\section ingestmodule_services Ingest Services
+
+The org.sleuthkit.autopsy.ingest.IngestServices class provides services specifically for ingest modules,
+and module developers should use these utilities to send messages, get the current case, etc. Refer to its documentation for method details.
+
+Remember to update references to IngestServices and Case with each call to init() inside of the module.
+
+Module developers are encouraged to use Autopsy's org.sleuthkit.autopsy.coreutils.Logger
+infrastructure to log errors to the Autopsy log.
+The logger can also be accessed using the org.sleuthkit.autopsy.ingest.IngestServices class.
+
+Certain modules may need a persistent store (other than for storing results) for storing and reading
+module configurations or state.
+The ModuleSettings API can also be used via the org.sleuthkit.autopsy.ingest.IngestServices class.
+
+
+\section ingestmodule_making_results Making Results Available to User
+
+Ingest modules run in the background. There are three ways to send messages and save results so that the user can see them:
+- Blackboard for long-term storage of analysis results and to display in the results tree.
+- Ingest Inbox to notify the user of high-value analysis results that were also posted to the blackboard.
+- Error messages.
+
+\subsection ingestmodule_making_results_bb Posting Results to Blackboard
+The blackboard is used to store results so that they are displayed in the results tree. See \ref platform_blackboard for details on posting results to it.
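One practical detail when posting results is notifying listeners efficiently: write all of a batch of artifacts first, then fire a single data event, rather than one event per artifact. The sketch below illustrates this with simplified stand-ins; it is not the real blackboard or IngestServices API.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch only: the fields and methods below are simplified stand-ins for
 * posting artifacts to the blackboard and for the
 * IngestServices.fireModuleDataEvent() notification call.
 */
public class BulkPostSketch {

    final List<String> artifacts = new ArrayList<>(); // stand-in for blackboard writes
    int eventsFired = 0;                              // counts notification events

    void postArtifact(String artifact) {
        artifacts.add(artifact);
    }

    void fireModuleDataEvent() { // stand-in for the IngestServices call
        eventsFired++;
    }

    // Write all artifacts first, then notify listeners exactly once,
    // rather than firing one event per artifact inside the loop.
    void bulkPost(List<String> hits) {
        for (String hit : hits) {
            postArtifact(hit);
        }
        if (!hits.isEmpty()) {
            fireModuleDataEvent();
        }
    }

    public static void main(String[] args) {
        BulkPostSketch module = new BulkPostSketch();
        module.bulkPost(List.of("hit1", "hit2", "hit3"));
        System.out.println(module.artifacts.size() + " artifacts, "
                + module.eventsFired + " event");
    }
}
```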
+
+The blackboard defines artifacts for specific data types (such as web bookmarks). You can use one of the standard artifact types, create your own, or simply post text with an org.sleuthkit.datamodel.BlackboardArtifact.ARTIFACT_TYPE.TSK_TOOL_OUTPUT artifact. The latter is much easier (for example, you can simply copy in the output from an existing tool), but it forces the user to parse the output themselves.
+
+When modules add data to the blackboard,
+they should notify listeners of the new data by
+invoking the IngestServices.fireModuleDataEvent() method.
+Do so as soon as you have added an artifact to the blackboard.
+This allows other modules (and the main UI) to know when to query the blackboard for the latest data.
+However, if you are writing a large number of blackboard artifacts in a loop, it is better to invoke
+IngestServices.fireModuleDataEvent() only once after the bulk write, so as not to flood the system with events.
+
+
+\subsection ingestmodule_making_results_inbox Posting Results to Message Inbox
+
+Modules should post messages to the inbox when interesting data is found
+that has also been posted to the blackboard. The idea behind these
+messages is that they are presented in chronological order so that
+users can see what was found while they were focusing on something else.
+Error messages are also sent here, as is summary information after the module has run, to give the user some feedback.
+
+
+These messages should only be sent if the result has a low false positive rate and will likely be relevant.
+For example, the hash lookup module will send messages if known bad (notable) files are found,
+but not if known good (NSRL) files are found. You can provide options to the user on when to send messages.
+
+
+A single message includes the module name, message subject, message details,
+a unique message id (in the context of the originating module), and a uniqueness attribute.
+The uniqueness attribute is used to group similar messages together
+and to determine the overall importance priority of the message
+(if the same message is seen repeatedly, it is considered lower priority).
+
+For example, for a keyword search module, the uniqueness attribute would be the keyword that was hit.
+
+Messages are created using the org.sleuthkit.autopsy.ingest.IngestMessage class and posted to the inbox using the org.sleuthkit.autopsy.ingest.IngestServices.postMessage() method.
+
+
+\subsection ingestmodule_making_results_error Reporting Errors
+
+When an error occurs, you should send a message to the ingest inbox with an error level. The downside of this is that the ingest inbox was not designed solely for this purpose and it is easy for the user to miss these messages. Therefore, these messages are flagged in the Ingest Inbox and a pop-up message is also displayed in the lower right.
+
+You can make your own message appear in the lower right by using
+org.sleuthkit.autopsy.coreutils.MessageNotifyUtil.Notify.show().
+
+
+
+\section ingestmodule_making_configuration Module Configuration
+
+Ingest modules may require user configuration. In \ref mod_dev_adv_options, you will learn about Autopsy-wide settings. There are some
+settings that are specific to ingest modules as well.
+
+The framework
+supports two levels of configuration: simple and advanced. Simple settings enable the user to enable and disable basic things at run-time (using check boxes and such).
+Advanced settings require more in-depth configuration with a more powerful interface.
+
+As an example, the advanced configuration for the keyword search module allows you to add and create keyword lists, choose encodings, etc. The simple interface allows
+you to enable and disable lists.
+
+Module configuration is module-specific: every module maintains its own configuration state and is responsible for implementing the graphical interface.
+If a module needs simple or advanced configuration, it needs to implement methods in its interface.
+The org.sleuthkit.autopsy.ingest.IngestModuleAbstract.hasSimpleConfiguration(),
+org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getSimpleConfiguration(), and org.sleuthkit.autopsy.ingest.IngestModuleAbstract.saveSimpleConfiguration()
+methods should be used for simple configuration. This panel will be shown when the user chooses which ingest modules to enable.
+
+The advanced configuration is implemented with the
+org.sleuthkit.autopsy.ingest.IngestModuleAbstract.hasAdvancedConfiguration(),
+org.sleuthkit.autopsy.ingest.IngestModuleAbstract.getAdvancedConfiguration(), and
+org.sleuthkit.autopsy.ingest.IngestModuleAbstract.saveAdvancedConfiguration()
+methods. This panel can be accessed from the "Advanced" button when the user chooses which ingest modules to enable.
+It is recommended that the advanced panel be the same panel that is used in the Options area (see \ref mod_dev_adv_options).
+
+Refer to \ref mod_dev_adv_properties for details on saving properties from these panels.
+
+
+
+
+*/
diff --git a/docs/doxygen/modReport.dox b/docs/doxygen/modReport.dox
index 8913ccfdbe..d3cfaf10cb 100644
--- a/docs/doxygen/modReport.dox
+++ b/docs/doxygen/modReport.dox
@@ -1,24 +1,36 @@
 /*! \page mod_report_page Developing Report Modules
 
 \section report_summary Summary
-Report modules allow you to create different report types. Autopsy comes with moules to generate HTML and a body file for timeline creation. You can made additional modules to create custom output formats.
+Report modules allow you to create different report types. Autopsy comes with modules to generate HTML and Excel artifact reports, a tab delimited File report, and a body file for timeline creation. You can make additional modules to create custom output formats.
 
-There are two types of reporting modules that differ in how the data is organized.
+There are three types of reporting modules that differ in how the data is organized.
 - General report modules are free form and you are allowed to organize the output however you want.
 
 - Table report modules organize the data into tables. If your output is in table format, this type of module will be easier to make because Autopsy does a lot of the organizing work for you.
+- File report modules are also table based, but they specifically deal with reporting on the files in the case, not artifacts.
 
-Each reporting submodule implements either the org.sleuthkit.autopsy.report.TableReportModule interface or the org.sleuthkit.autopsy.report.GeneralReportModule interface, and registers itself in layer.xml
+Each reporting submodule implements one of the org.sleuthkit.autopsy.report.TableReportModule, org.sleuthkit.autopsy.report.GeneralReportModule, or org.sleuthkit.autopsy.report.FileReportModule interfaces and registers itself in layer.xml.
 
-Implementing either of those interfaces will require the reporting module to implement a number of abstract methods. And depending on the type of report module, different methods will be invoked by the application.
+Implementing any of those interfaces will require the reporting module to implement a number of abstract methods. Depending on the type of report module, different methods will be invoked by the application.
 
 Table report modules require their sub-classes to override methods to start and end tables, and add rows to those tables. These methods are provided data, generated from a default configuration panel, for the module to report on. Because of this, when creating a table report module one only needs to focus on how to display the data, not how to find it.
 
+File report modules are similar to table report modules, but only require their sub-classes to start and end a single table, and add rows to that table.
The methods are given an AbstractFile and a list of FileReportDataTypes, which specify what information about the file should be added to the report. The data can be extracted from the file by calling the FileReportDataTypes getValue method with the file as its argument.
+
 On the other hand, general report modules have a single method to generate the report. This method gives the module freedom to find and process any data it so chooses. General modules also have the ability to provide a configuration panel, allowing the user to choose from various displayed settings. The report module may then use the user's selection to generate a more specific report.
 
-General modules are also given the responsibility of updating their report's progress bar and processing label in the UI. A progress panel is given to every general report module. It contains basic API to start, stop, and add to the progress bar, as well as update the processing label. The module is also expeted to check the progress bar's status occasionally to see if the user has manually canceled the report.
+General modules are also given the responsibility of updating their report's progress bar and processing label in the UI. A progress panel is given to every general report module. It contains basic API to start, stop, and add to the progress bar, as well as update the processing label. The module is also expected to check the progress bar's status occasionally to see if the user has manually canceled the report.
 
 \section report_create_module Creating a Report Module
-To create a table report module, start off by creating a class and implementing either the TableReportModule interface or the GeneralReportModule interface.
+To create a report module, start off by creating a class that implements one of the TableReportModule, FileReportModule, or GeneralReportModule interfaces.
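As a minimal starting point, the skeleton below shows the common shape of a report module. The ReportModule interface here is a pared-down stand-in for org.sleuthkit.autopsy.report.ReportModule (the real contract carries additional table/file/general-specific methods), and the class name and return values are hypothetical.

```java
/**
 * Sketch only: "ReportModule" is a pared-down stand-in for the real
 * org.sleuthkit.autopsy.report.ReportModule contract; the class name and
 * all return values are hypothetical.
 */
interface ReportModule {
    String getName();        // shown to the user when picking report types
    String getDescription(); // short description shown alongside the name
    String getExtension();   // file extension of the generated report
    String getFilePath();    // where the report file is written
}

public class SampleReportSketch implements ReportModule {

    @Override
    public String getName() {
        return "Sample CSV Report";
    }

    @Override
    public String getDescription() {
        return "Writes results to a comma-separated file.";
    }

    @Override
    public String getExtension() {
        return ".csv";
    }

    @Override
    public String getFilePath() {
        return "sample-report.csv";
    }

    public static void main(String[] args) {
        ReportModule module = new SampleReportSketch();
        System.out.println(module.getName() + " -> " + module.getFilePath());
    }
}
```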
+ +\section report_create_module_all All Report Modules +All report modules will need to override the following methods: +- org.sleuthkit.autopsy.report.ReportModule::getName() +- org.sleuthkit.autopsy.report.ReportModule::getDescription() +- org.sleuthkit.autopsy.report.ReportModule::getExtension() +- org.sleuthkit.autopsy.report.ReportModule::getFilePath() + +These will be called by Autopsy to set up the configuration panels and other information. \subsection report_create_module_table Table Report Modules If you implement TableReportModule, you should override the methods: @@ -35,7 +47,17 @@ If you implement TableReportModule, you should override the methods: - org.sleuthkit.autopsy.report.TableReportModule::addRow(List row) - org.sleuthkit.autopsy.report.TableReportModule::dateToString(long date) -When generating table module reports, Autopsy will iterate through a list of user selected data, and call methods such as addRow(List row) for every "row" of data it finds, or startTable(List titles) for every new category it finds. Developers are guarenteed that every start of a data type, set, or table will be followed by an approptiate end. The focus for a table report module should be to take the given information and display it in a user friendly format. See org.sleuthkit.autopsy.report.ReportExcel for an example. +When generating table module reports, Autopsy will iterate through a list of user selected data, and call methods such as addRow(List row) for every "row" of data it finds, or startTable(List titles) for every new category it finds. Developers are guaranteed that every start of a data type, set, or table will be followed by an appropriate end. The focus for a table report module should be to take the given information and display it in a user friendly format. See org.sleuthkit.autopsy.report.ReportExcel for an example. 
+
+\subsection report_create_module_file File Report Modules
+If you implement FileReportModule, the overridden methods will be:
+- org.sleuthkit.autopsy.report.FileReportModule::startReport(String path)
+- org.sleuthkit.autopsy.report.FileReportModule::endReport()
+- org.sleuthkit.autopsy.report.FileReportModule::startTable(List headers)
+- org.sleuthkit.autopsy.report.FileReportModule::endTable()
+- org.sleuthkit.autopsy.report.FileReportModule::addRow(AbstractFile toAdd, List columns)
+
+As when generating table module reports, Autopsy will iterate through a list of user selected data (which are represented by FileReportDataTypes), and call addRow(AbstractFile toAdd, List columns) for every abstract file in the case. Developers are guaranteed that the order of method calls will be startReport(), startTable(List headers), addRow(AbstractFile toAdd, List columns), addRow(AbstractFile toAdd, List columns), ..., endTable(), endReport().
 
 \subsection report_create_module_general General Report Modules
 If you implement GeneralReportModule, the overridden methods will be:
diff --git a/docs/doxygen/platformConcepts.dox b/docs/doxygen/platformConcepts.dox
index 72701c5b5c..e75fe4d584 100644
--- a/docs/doxygen/platformConcepts.dox
+++ b/docs/doxygen/platformConcepts.dox
@@ -1,65 +1,32 @@
-/*! \page platform_page Platform Concepts
-
-\section platform_basics Basic Concepts
-
-These are the basic concepts that you should be aware of before writing a module:
-
-- Phases: The platform has been design to support different phases in the investigation process:
- - Case Creation: Use wizards to create a new case.
- - Data Source Adding: Where disk images and logical files are added to a case and file systems in disk images are analyzed to populate the database. The end result of this phase is that the central database has a basic record of each file so that it can be analyzed. This happens in the Add Image Wizard.
- - Ingest Module Analysis: A variety of analysis modules then run on the files referenced in the database to perform specific tasks.
- - Browsing and searching: User manually browses and searches the data using the user interface. They can browse through the results from the ingest modules that may still be running in the background.
+/*! \page platform_page Platform Concepts
+
+\section platform_basics Basic Concepts
+
+These are the basic concepts that you should be aware of before writing a module:
+
+- Phases: The platform has been designed to support different phases in the investigation process:
+ - Case Creation: Use wizards to create a new case.
+ - Data Source Adding: Where disk images and logical files are added to a case and file systems in disk images are analyzed to populate the database. The end result of this phase is that the central database has a basic record of each file so that it can be analyzed. This happens in the Add Image Wizard.
+ - Ingest Module Analysis: A variety of analysis modules then run on the files referenced in the database to perform specific tasks.
+ - Browsing and searching: The user manually browses and searches the data using the user interface. They can browse through the results from the ingest modules that may still be running in the background.
 - Report: A final report is generated at the end of the case.
 
 - Central Database: All data except for the disk image is stored in a SQLite database. This includes information about what files exist in a disk image and the output from modules. Access to this database can be found from the org.sleuthkit.datamodel.SleuthkitCase class, but you'll probably never need to directly interact with it. The services and data model classes will interact with it.
 
-- Case: A case class (org.sleuthkit.autopsy.casemodule.Case) is the top-level object for the data being analyzed. From here, you can access all of the files and query it.
+- Case: A case class (org.sleuthkit.autopsy.casemodule.Case) is the top-level object for the data being analyzed. From here, you can access all of the files and query it. - Blackboard: The platform uses the blackboard to enable modules to communicate with each other and to display data in the GUI. See the \ref platform_blackboard section for more details. -- Services: There are services provided by the platform. See the \ref mod_dev_other_services section for more details. +- Services: There are services provided by the platform. See the \ref mod_dev_other_services section for more details. - Utilities: There are core utilities that the platform provides to modules. See the \ref mod_dev_other_utilities section for more details. -- Single tree: Results from the various modules can generally be found in a single tree. This makes it easy for users to find their results. - - -\section platform_frameworks Frameworks in the Platform -Autopsy was designed to be an extensible platform for other developers to leverage. There are several places in the platform where plug-in modules can be applied. -- Ingest Modules: These modules are run when a new data source is added to a case (and can be re-run afterwards too). These modules come in two forms: - - File Ingest Modules are called for every file in the image. Use this type of module if you want to examine the contents of all or most of the files. Examples include hash calculation, hash lookup, file type identification, and entropy calculation. +- Single tree: Results from the various modules can generally be found in a single tree. This makes it easy for users to find their results. + + +\section platform_frameworks Frameworks in the Platform +Autopsy was designed to be an extensible platform for other developers to leverage. There are several places in the platform where plug-in modules can be applied. +- Ingest Modules: These modules are run when a new data source is added to a case (and can be re-run afterwards too). 
These modules come in two forms:
+ - File Ingest Modules are called for every file in the image. Use this type of module if you want to examine the contents of all or most of the files. Examples include hash calculation, hash lookup, file type identification, and entropy calculation.
 - Data Source Ingest Modules are called once for every image or set of logical files. These modules can use the database to query for one or more files and perform analysis on them. Examples include web artifact analysis and searches that can rely on only file names and extensions. See \ref mod_ingest_page for details on building these modules.
-- Report Modules: These modules create different types of outputs that contain the analysis results. See \ref mod_report_page for details on creating these modules.
-- Content Viewers: These modules show information about a specific file. These are the modules in the lower right of the interface. The platform comes with viewers to view the file in hexadecimal, extract the strings from the file, and view images and movies. See \ref mod_content_page for details on creating these modules.
-- Result Viewers: These modules show information about a set of files. These modules are in the upper right of the interface. The platform comes with viewers to view the set of files in a table and thumbnails. See \ref mod_result_page for details on creating these modules.
+- Report Modules: These modules create different types of outputs that contain the analysis results. See \ref mod_report_page for details on creating these modules.
+- Content Viewers: These modules show information about a specific file. These are the modules in the lower right of the interface. The platform comes with viewers to view the file in hexadecimal, extract the strings from the file, and view images and movies. See \ref mod_content_page for details on creating these modules.
+- Result Viewers: These modules show information about a set of files.
These modules are in the upper right of the interface. The platform comes with viewers to view the set of files in a table and thumbnails. See \ref mod_result_page for details on creating these modules. -\section platform_details More Details -This section expands on the concepts that were previously listed. - -\subsection platform_blackboard The Blackboard - -The blackboard allows modules to communicate with each other and the UI. It has three main uses in Autopsy: -- Ingest modules can communicate with each other. For example, one module can calculate a MD5 hash of a file and post it to the blackboard. Then another module can retrieve the hash value from the blackboard and not need to calculate it again. -- The tree in the right-hand side of the UI uses the blackboard to populate its Results section. The bookmarks, hashset hits, etc. are all populated from Ingest modules that created blackboard entries. -- The report modules query the blackboard to identify what they should report on. -The blackboard is not unique to Autopsy. It is part of The Sleuth Kit datamodel and The Sleuth Kit Framework. In the name of reducing the amount of documentation that we need to maintain, we provide links here to those documentation sources. - -- Details on the blackboard concepts (artifacts versus attributes) can be found at http://sleuthkit.org/sleuthkit/docs/framework-docs/mod_bbpage.html. These documents are about the C++ implementation of the blackboard, but it is the same concepts. -- Details of the Java classes can be found in \ref jni_blackboard section of the The Sleuth Kit JNI documents (http://sleuthkit.org/sleuthkit/docs/jni-docs/). - - -\subsection mod_dev_other_services Framework Services - -Autopsy provides basic services to its modules. These were created to make it easier to write modules. 
Currently, the following -services are provided: - -- FileManager: the org.sleuthkit.autopsy.casemodule.services.FileManager service provides an API to access any file in the case. You can access FileManager by calling org.sleuthkit.autopsy.casemodule.services.Services.getFileManager(). Data Source-level Ingest modules and Report modules typically use this service because the other modules are passed in a reference to a specific file to do something with. -- org.sleuthkit.autopsy.coreutils.Logger - for adding log messages to central logger -- IngestModules also have a class that provides additional services. See \ref ingestmodule_services. -- MessageNotifyUtil.Notify.show() can be used to send messages to the user in the lower right-hand area. - - -\subsection mod_dev_other_utilities Framework Utilities - -In addition to the services previously listed, there are some general utilities that could be useful to modules. These include: -- org.sleuthkit.autopsy.coreutils.PlatformUtil - platform-specific methods to determine available disk space, memory, etc. -- org.sleuthkit.autopsy.coreutils.ModuleSettings - to persist module configuration and settings -- org.sleuthkit.autopsy.coreutils.FileUtil - to delete and add folders, etc. - -*/ +*/ diff --git a/docs/doxygen/services.dox b/docs/doxygen/services.dox new file mode 100644 index 0000000000..48b8039025 --- /dev/null +++ b/docs/doxygen/services.dox @@ -0,0 +1,37 @@ +/*! \page services_page Platform Services + + +\section platform_services Services +The platform provides a variety of services and utilities that you need to be familiar with. This section outlines the basic ones and additional ones are described at the end of the document in \ref mod_dev_adv. + +\subsection platform_blackboard The Blackboard + +The blackboard allows modules to communicate with each other and the UI. It has three main uses in Autopsy: +- Ingest modules can communicate with each other. 
For example, one module can calculate an MD5 hash of a file and post it to the blackboard. Then another module can retrieve the hash value from the blackboard and not need to calculate it again. +- The tree on the right-hand side of the UI uses the blackboard to populate its Results section. The bookmarks, hashset hits, etc. are all populated from Ingest modules that created blackboard entries. +- The report modules query the blackboard to identify what they should report on. + +The blackboard is not unique to Autopsy. It is part of The Sleuth Kit datamodel and The Sleuth Kit Framework. In the name of reducing the amount of documentation that we need to maintain, we provide links here to those documentation sources. + +- Details on the blackboard concepts (artifacts versus attributes) can be found at http://sleuthkit.org/sleuthkit/docs/framework-docs/mod_bbpage.html. These documents are about the C++ implementation of the blackboard, but the concepts are the same. +- Details of the Java classes can be found in the \ref jni_blackboard section of The Sleuth Kit JNI documents (http://sleuthkit.org/sleuthkit/docs/jni-docs/). + + +\subsection mod_dev_other_services Framework Services + +The following are basic services that are available. + +- FileManager: the org.sleuthkit.autopsy.casemodule.services.FileManager service provides an API to access any file in the case. You can access FileManager by calling org.sleuthkit.autopsy.casemodule.services.Services.getFileManager(). Data Source-level Ingest modules and Report modules typically use this service because the other module types are passed a reference to a specific file to work with. +- org.sleuthkit.autopsy.coreutils.Logger - Use this class to log error and informational messages to the central Autopsy log file.
+- If you have a background task that needs to provide the user with feedback, you can use the org.sleuthkit.autopsy.coreutils.MessageNotifyUtil.Notify.show() method to display a message in the lower right-hand area. +- Ingest modules also have a class that provides additional services. See \ref ingestmodule_services. + + +\subsection mod_dev_other_utilities Framework Utilities + +In addition to the services previously listed, there are some general utilities that could be useful to modules. These include: +- org.sleuthkit.autopsy.coreutils.PlatformUtil - platform-specific methods to determine available disk space, memory, etc. +- org.sleuthkit.autopsy.coreutils.ModuleSettings - to persist module configuration and settings +- org.sleuthkit.autopsy.coreutils.FileUtil - to delete and add folders, etc. + +*/ diff --git a/nbproject/platform.properties b/nbproject/platform.properties index e0bdd68b73..a9fa87f749 100644 --- a/nbproject/platform.properties +++ b/nbproject/platform.properties @@ -1,120 +1,120 @@ -branding.token=autopsy -netbeans-plat-version=7.3.1 -suite.dir=${basedir}
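The services.dox page added above describes the blackboard in terms of artifacts (findings) made of attributes (name/value pairs) that one module posts and another module, or the UI, later queries. As a conceptual illustration only — the real implementation is the Java datamodel in The Sleuth Kit, and the class and method names below are invented for this sketch — the post/query pattern looks like:

```python
# Toy model of the blackboard concept: NOT the real Sleuth Kit API.
class Blackboard:
    """Minimal shared store of artifacts keyed by artifact type."""

    def __init__(self):
        self._artifacts = []

    def post(self, artifact_type, attributes):
        # An artifact here is just its type plus a dict of attributes.
        self._artifacts.append({"type": artifact_type, "attrs": attributes})

    def get_artifacts(self, artifact_type):
        # Other modules (or the UI tree / report modules) query by type.
        return [a for a in self._artifacts if a["type"] == artifact_type]


# One ingest module posts an MD5 hash it computed...
board = Blackboard()
board.post("TSK_HASH_MD5", {"file": "contacts.db",
                            "md5": "d41d8cd98f00b204e9800998ecf8427e"})

# ...and a later module retrieves it instead of recomputing.
found = board.get_artifacts("TSK_HASH_MD5")
```

In the real Java datamodel the corresponding pieces are artifact and attribute classes documented in the JNI docs linked above; this sketch only shows the data flow.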
org.netbeans.modules.ant.grammar,\ - org.netbeans.modules.ant.kit,\ - org.netbeans.modules.beans,\ - org.netbeans.modules.classfile,\ - org.netbeans.modules.dbschema,\ - org.netbeans.modules.debugger.jpda,\ - org.netbeans.modules.debugger.jpda.ant,\ - org.netbeans.modules.debugger.jpda.kit,\ - org.netbeans.modules.debugger.jpda.projects,\ - org.netbeans.modules.debugger.jpda.ui,\ - org.netbeans.modules.debugger.jpda.visual,\ - org.netbeans.modules.findbugs.installer,\ - org.netbeans.modules.form,\ - org.netbeans.modules.form.binding,\ - org.netbeans.modules.form.j2ee,\ - org.netbeans.modules.form.kit,\ - org.netbeans.modules.form.nb,\ - org.netbeans.modules.form.refactoring,\ - org.netbeans.modules.hibernate,\ - org.netbeans.modules.hibernatelib,\ - org.netbeans.modules.hudson.ant,\ - org.netbeans.modules.hudson.maven,\ - org.netbeans.modules.i18n,\ - org.netbeans.modules.i18n.form,\ - org.netbeans.modules.j2ee.core.utilities,\ - org.netbeans.modules.j2ee.eclipselink,\ - org.netbeans.modules.j2ee.eclipselinkmodelgen,\ - org.netbeans.modules.j2ee.jpa.refactoring,\ - org.netbeans.modules.j2ee.jpa.verification,\ - org.netbeans.modules.j2ee.metadata,\ - org.netbeans.modules.j2ee.metadata.model.support,\ - org.netbeans.modules.j2ee.persistence,\ - org.netbeans.modules.j2ee.persistence.kit,\ - org.netbeans.modules.j2ee.persistenceapi,\ - org.netbeans.modules.java.api.common,\ - org.netbeans.modules.java.debug,\ - org.netbeans.modules.java.editor,\ - org.netbeans.modules.java.editor.lib,\ - org.netbeans.modules.java.examples,\ - org.netbeans.modules.java.freeform,\ - org.netbeans.modules.java.guards,\ - org.netbeans.modules.java.helpset,\ - org.netbeans.modules.java.hints,\ - org.netbeans.modules.java.hints.declarative,\ - org.netbeans.modules.java.hints.declarative.test,\ - org.netbeans.modules.java.hints.legacy.spi,\ - org.netbeans.modules.java.hints.test,\ - org.netbeans.modules.java.hints.ui,\ - org.netbeans.modules.java.j2seplatform,\ - 
org.netbeans.modules.java.j2seproject,\ - org.netbeans.modules.java.kit,\ - org.netbeans.modules.java.lexer,\ - org.netbeans.modules.java.navigation,\ - org.netbeans.modules.java.platform,\ - org.netbeans.modules.java.preprocessorbridge,\ - org.netbeans.modules.java.project,\ - org.netbeans.modules.java.source,\ - org.netbeans.modules.java.source.ant,\ - org.netbeans.modules.java.source.queries,\ - org.netbeans.modules.java.source.queriesimpl,\ - org.netbeans.modules.java.sourceui,\ - org.netbeans.modules.java.testrunner,\ - org.netbeans.modules.javadoc,\ - org.netbeans.modules.javawebstart,\ - org.netbeans.modules.junit,\ - org.netbeans.modules.maven,\ - org.netbeans.modules.maven.checkstyle,\ - org.netbeans.modules.maven.coverage,\ - org.netbeans.modules.maven.embedder,\ - org.netbeans.modules.maven.grammar,\ - org.netbeans.modules.maven.graph,\ - org.netbeans.modules.maven.hints,\ - org.netbeans.modules.maven.indexer,\ - org.netbeans.modules.maven.junit,\ - org.netbeans.modules.maven.kit,\ - org.netbeans.modules.maven.model,\ - org.netbeans.modules.maven.osgi,\ - org.netbeans.modules.maven.persistence,\ - org.netbeans.modules.maven.refactoring,\ - org.netbeans.modules.maven.repository,\ - org.netbeans.modules.maven.search,\ - org.netbeans.modules.maven.spring,\ - org.netbeans.modules.projectimport.eclipse.core,\ - org.netbeans.modules.projectimport.eclipse.j2se,\ - org.netbeans.modules.refactoring.java,\ - org.netbeans.modules.spellchecker.bindings.java,\ - org.netbeans.modules.spring.beans,\ - org.netbeans.modules.testng,\ - org.netbeans.modules.testng.ant,\ - org.netbeans.modules.testng.maven,\ - org.netbeans.modules.websvc.jaxws21,\ - org.netbeans.modules.websvc.jaxws21api,\ - org.netbeans.modules.websvc.saas.codegen.java,\ - org.netbeans.modules.xml.jaxb,\ - org.netbeans.modules.xml.tools.java,\ - org.netbeans.spi.java.hints - +branding.token=autopsy +netbeans-plat-version=7.3.1 +suite.dir=${basedir} 
+nbplatform.active.dir=${suite.dir}/netbeans-plat/${netbeans-plat-version} +harness.dir=${nbplatform.active.dir}/harness +bootstrap.url=http://deadlock.netbeans.org/hudson/job/nbms-and-javadoc/lastStableBuild/artifact/nbbuild/netbeans/harness/tasks.jar +autoupdate.catalog.url=http://dlc.sun.com.edgesuite.net/netbeans/updates/${netbeans-plat-version}/uc/final/distribution/catalog.xml.gz +cluster.path=\ + ${nbplatform.active.dir}/harness:\ + ${nbplatform.active.dir}/java:\ + ${nbplatform.active.dir}/platform +disabled.modules=\ + org.apache.tools.ant.module,\ + org.netbeans.api.debugger.jpda,\ + org.netbeans.api.java,\ + org.netbeans.lib.nbjavac,\ + org.netbeans.libs.cglib,\ + org.netbeans.libs.javacapi,\ + org.netbeans.libs.javacimpl,\ + org.netbeans.libs.springframework,\ + org.netbeans.modules.ant.browsetask,\ + org.netbeans.modules.ant.debugger,\ + org.netbeans.modules.ant.freeform,\ + org.netbeans.modules.ant.grammar,\ + org.netbeans.modules.ant.kit,\ + org.netbeans.modules.beans,\ + org.netbeans.modules.classfile,\ + org.netbeans.modules.dbschema,\ + org.netbeans.modules.debugger.jpda,\ + org.netbeans.modules.debugger.jpda.ant,\ + org.netbeans.modules.debugger.jpda.kit,\ + org.netbeans.modules.debugger.jpda.projects,\ + org.netbeans.modules.debugger.jpda.ui,\ + org.netbeans.modules.debugger.jpda.visual,\ + org.netbeans.modules.findbugs.installer,\ + org.netbeans.modules.form,\ + org.netbeans.modules.form.binding,\ + org.netbeans.modules.form.j2ee,\ + org.netbeans.modules.form.kit,\ + org.netbeans.modules.form.nb,\ + org.netbeans.modules.form.refactoring,\ + org.netbeans.modules.hibernate,\ + org.netbeans.modules.hibernatelib,\ + org.netbeans.modules.hudson.ant,\ + org.netbeans.modules.hudson.maven,\ + org.netbeans.modules.i18n,\ + org.netbeans.modules.i18n.form,\ + org.netbeans.modules.j2ee.core.utilities,\ + org.netbeans.modules.j2ee.eclipselink,\ + org.netbeans.modules.j2ee.eclipselinkmodelgen,\ + org.netbeans.modules.j2ee.jpa.refactoring,\ + 
org.netbeans.modules.j2ee.jpa.verification,\ + org.netbeans.modules.j2ee.metadata,\ + org.netbeans.modules.j2ee.metadata.model.support,\ + org.netbeans.modules.j2ee.persistence,\ + org.netbeans.modules.j2ee.persistence.kit,\ + org.netbeans.modules.j2ee.persistenceapi,\ + org.netbeans.modules.java.api.common,\ + org.netbeans.modules.java.debug,\ + org.netbeans.modules.java.editor,\ + org.netbeans.modules.java.editor.lib,\ + org.netbeans.modules.java.examples,\ + org.netbeans.modules.java.freeform,\ + org.netbeans.modules.java.guards,\ + org.netbeans.modules.java.helpset,\ + org.netbeans.modules.java.hints,\ + org.netbeans.modules.java.hints.declarative,\ + org.netbeans.modules.java.hints.declarative.test,\ + org.netbeans.modules.java.hints.legacy.spi,\ + org.netbeans.modules.java.hints.test,\ + org.netbeans.modules.java.hints.ui,\ + org.netbeans.modules.java.j2seplatform,\ + org.netbeans.modules.java.j2seproject,\ + org.netbeans.modules.java.kit,\ + org.netbeans.modules.java.lexer,\ + org.netbeans.modules.java.navigation,\ + org.netbeans.modules.java.platform,\ + org.netbeans.modules.java.preprocessorbridge,\ + org.netbeans.modules.java.project,\ + org.netbeans.modules.java.source,\ + org.netbeans.modules.java.source.ant,\ + org.netbeans.modules.java.source.queries,\ + org.netbeans.modules.java.source.queriesimpl,\ + org.netbeans.modules.java.sourceui,\ + org.netbeans.modules.java.testrunner,\ + org.netbeans.modules.javadoc,\ + org.netbeans.modules.javawebstart,\ + org.netbeans.modules.junit,\ + org.netbeans.modules.maven,\ + org.netbeans.modules.maven.checkstyle,\ + org.netbeans.modules.maven.coverage,\ + org.netbeans.modules.maven.embedder,\ + org.netbeans.modules.maven.grammar,\ + org.netbeans.modules.maven.graph,\ + org.netbeans.modules.maven.hints,\ + org.netbeans.modules.maven.indexer,\ + org.netbeans.modules.maven.junit,\ + org.netbeans.modules.maven.kit,\ + org.netbeans.modules.maven.model,\ + org.netbeans.modules.maven.osgi,\ + 
org.netbeans.modules.maven.persistence,\ + org.netbeans.modules.maven.refactoring,\ + org.netbeans.modules.maven.repository,\ + org.netbeans.modules.maven.search,\ + org.netbeans.modules.maven.spring,\ + org.netbeans.modules.projectimport.eclipse.core,\ + org.netbeans.modules.projectimport.eclipse.j2se,\ + org.netbeans.modules.refactoring.java,\ + org.netbeans.modules.spellchecker.bindings.java,\ + org.netbeans.modules.spring.beans,\ + org.netbeans.modules.testng,\ + org.netbeans.modules.testng.ant,\ + org.netbeans.modules.testng.maven,\ + org.netbeans.modules.websvc.jaxws21,\ + org.netbeans.modules.websvc.jaxws21api,\ + org.netbeans.modules.websvc.saas.codegen.java,\ + org.netbeans.modules.xml.jaxb,\ + org.netbeans.modules.xml.tools.java,\ + org.netbeans.spi.java.hints + diff --git a/nbproject/project.properties b/nbproject/project.properties index e70fffba4b..6446b47619 100644 --- a/nbproject/project.properties +++ b/nbproject/project.properties @@ -4,7 +4,7 @@ app.title=Autopsy ### lowercase version of above app.name=autopsy ### if left unset, version will default to today's date -app.version=3.0.7 +app.version=3.0.8 ### Build type isn't used at this point, but it may be useful ### Must be one of: DEVELOPMENT, RELEASE build.type=RELEASE diff --git a/test/README.txt b/test/README.txt index d0064b4f95..854f5e1a33 100644 --- a/test/README.txt +++ b/test/README.txt @@ -1,13 +1,13 @@ -This folder contains the data and scripts required to run regression tests -for Autopsy. There is a 'Testing' folder in the root directory that contains -the Java code that drives Autopsy to perform the tests. - -To run these tests: -- You will need python3. We run this from within Cygwin. -- Download the input images by typing 'ant test-download-imgs' in the root Autopsy folder. - This will place images in 'test/input'. -- Run 'python3 regression.py' from inside of the 'test/scripts' folder. 
-- Alternatively, run 'python3 regression.py -l [CONFIGFILE] to run the tests on a specified - list of images using a configuration file. See config.xml in the 'test/scripts' folder to - see configuration file formatting. -- Run 'python3 regression.py -h' to see other options. +This folder contains the data and scripts required to run regression tests +for Autopsy. There is a 'Testing' folder in the root directory that contains +the Java code that drives Autopsy to perform the tests. + +To run these tests: +- You will need python3. We run this from within Cygwin. +- Download the input images by typing 'ant test-download-imgs' in the root Autopsy folder. + This will place images in 'test/input'. +- Run 'python3 regression.py' from inside of the 'test/scripts' folder. +- Alternatively, run 'python3 regression.py -l [CONFIGFILE]' to run the tests on a specified + list of images using a configuration file. See config.xml in the 'test/scripts' folder to + see configuration file formatting. +- Run 'python3 regression.py -h' to see other options. diff --git a/test/script/Emailer.py b/test/script/Emailer.py index 5d12e6afa3..7e661e12ea 100644 --- a/test/script/Emailer.py +++ b/test/script/Emailer.py @@ -1,49 +1,49 @@ -import smtplib -from email.mime.image import MIMEImage -from email.mime.multipart import MIMEMultipart -from email.mime.text import MIMEText -from email.mime.base import MIMEBase -from email import encoders -import xml -from xml.dom.minidom import parse, parseString - -def send_email(to, server, subj, body, attachments): - """Send an email with the given information.
- - Args: - to: a String, the email address to send the email to - server: a String, the mail server to send from - subj: a String, the subject line of the message - body: a String, the body of the message - attachments: a listof_pathto_File, the attachements to include - """ - msg = MIMEMultipart() - msg['Subject'] = subj - # me == the sender's email address - # family = the list of all recipients' email addresses - msg['From'] = 'AutopsyTest' - msg['To'] = to - msg.preamble = 'This is a test' - container = MIMEText(body, 'plain') - msg.attach(container) - Build_email(msg, attachments) - s = smtplib.SMTP(server) - try: - print('Sending Email') - s.sendmail(msg['From'], msg['To'], msg.as_string()) - except Exception as e: - print(str(e)) - s.quit() - -def Build_email(msg, attachments): - for file in attachments: - part = MIMEBase('application', "octet-stream") - atach = open(file, "rb") - attch = atach.read() - noml = file.split("\\") - nom = noml[len(noml)-1] - part.set_payload(attch) - encoders.encode_base64(part) - part.add_header('Content-Disposition', 'attachment; filename="' + nom + '"') - msg.attach(part) - +import smtplib +from email.mime.image import MIMEImage +from email.mime.multipart import MIMEMultipart +from email.mime.text import MIMEText +from email.mime.base import MIMEBase +from email import encoders +import xml +from xml.dom.minidom import parse, parseString + +def send_email(to, server, subj, body, attachments): + """Send an email with the given information. 
+ + Args: + to: a String, the email address to send the email to + server: a String, the mail server to send from + subj: a String, the subject line of the message + body: a String, the body of the message + attachments: a listof_pathto_File, the attachments to include + """ + msg = MIMEMultipart() + msg['Subject'] = subj + # me == the sender's email address + # family = the list of all recipients' email addresses + msg['From'] = 'AutopsyTest' + msg['To'] = to + msg.preamble = 'This is a test' + container = MIMEText(body, 'plain') + msg.attach(container) + Build_email(msg, attachments) + s = smtplib.SMTP(server) + try: + print('Sending Email') + s.sendmail(msg['From'], msg['To'], msg.as_string()) + except Exception as e: + print(str(e)) + s.quit() + +def Build_email(msg, attachments): + for file in attachments: + part = MIMEBase('application', "octet-stream") + with open(file, "rb") as atach: + attch = atach.read() + noml = file.split("\\") + nom = noml[len(noml)-1] + part.set_payload(attch) + encoders.encode_base64(part) + part.add_header('Content-Disposition', 'attachment; filename="' + nom + '"') + msg.attach(part) + diff --git a/test/script/regression.py b/test/script/regression.py index b2ad319963..6c640823ed 100644 --- a/test/script/regression.py +++ b/test/script/regression.py @@ -1,1854 +1,1854 @@ -#!/usr/bin/python -# -*- coding: utf_8 -*- - - # Autopsy Forensic Browser - # - # Copyright 2013 Basis Technology Corp. - # - # Licensed under the Apache License, Version 2.0 (the "License"); - # you may not use this file except in compliance with the License. - # You may obtain a copy of the License at - # - # http://www.apache.org/licenses/LICENSE-2.0 - # - # Unless required by applicable law or agreed to in writing, software - # distributed under the License is distributed on an "AS IS" BASIS, - # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
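Build_email in the Emailer.py hunk above derives each attachment's display name by manually splitting the path on backslashes. The standard library's ntpath.basename does the same job and also tolerates forward slashes; a small standalone sketch (attachment_name is a hypothetical helper name, not part of the script):

```python
import ntpath


def attachment_name(path):
    """Return the filename portion of a Windows-style path.

    Equivalent to the manual file.split("\\")[-1] in Build_email,
    but ntpath also splits on '/' separators.
    """
    return ntpath.basename(path)
```

Using os.path.basename instead would only be correct when the script runs on Windows, since on POSIX systems os.path does not treat backslashes as separators.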
- # See the License for the specific language governing permissions and - # limitations under the License. -from tskdbdiff import TskDbDiff, TskDbDiffException -import codecs -import datetime -import logging -import os -import re -import shutil -import socket -import sqlite3 -import subprocess -import sys -from sys import platform as _platform -import time -import traceback -import xml -from time import localtime, strftime -from xml.dom.minidom import parse, parseString -import smtplib -from email.mime.image import MIMEImage -from email.mime.multipart import MIMEMultipart -from email.mime.text import MIMEText -import re -import zipfile -import zlib -import Emailer -import srcupdater -from regression_utils import * - -# -# Please read me... -# -# This is the regression testing Python script. -# It uses an ant command to run build.xml for RegressionTest.java -# -# The code is cleanly sectioned and commented. -# Please follow the current formatting. -# It is a long and potentially confusing script. -# -# Variable, function, and class names are written in Python conventions: -# this_is_a_variable this_is_a_function() ThisIsAClass -# -# - - -# Data Definitions: -# -# pathto_X: A path to type X. 
-# ConfigFile: An XML file formatted according to the template in myconfig.xml -# ParsedConfig: A dom object that represents a ConfigFile -# SQLCursor: A cursor recieved from a connection to an SQL database -# Nat: A Natural Number -# Image: An image -# - -# Enumeration of database types used for the simplification of generating database paths -DBType = enum('OUTPUT', 'GOLD', 'BACKUP') - -# Common filename of the output and gold databases (although they are in different directories -DB_FILENAME = "autopsy.db" - -# Backup database filename -BACKUP_DB_FILENAME = "autopsy_backup.db" - -# TODO: Double check this purpose statement -# Folder name for gold standard database testing -AUTOPSY_TEST_CASE = "AutopsyTestCase" - -# TODO: Double check this purpose statement -# The filename of the log to store error messages -COMMON_LOG = "AutopsyErrors.txt" - -Day = 0 - -#----------------------# -# Main # -#----------------------# -def main(): - """Parse the command-line arguments, create the configuration, and run the tests.""" - args = Args() - parse_result = args.parse() - test_config = TestConfiguration(args) - # The arguments were given wrong: - if not parse_result: - return - if(not args.fr): - antin = ["ant"] - antin.append("-f") - antin.append(os.path.join("..","..","build.xml")) - antin.append("test-download-imgs") - if SYS is OS.CYGWIN: - subprocess.call(antin) - elif SYS is OS.WIN: - theproc = subprocess.Popen(antin, shell = True, stdout=subprocess.PIPE) - theproc.communicate() - # Otherwise test away! - TestRunner.run_tests(test_config) - - -class TestRunner(object): - """A collection of functions to run the regression tests.""" - - def run_tests(test_config): - """Run the tests specified by the main TestConfiguration. 
- - Executes the AutopsyIngest for each image and dispatches the results based on - the mode (rebuild or testing) - """ - test_data_list = [ TestData(image, test_config) for image in test_config.images ] - - Reports.html_add_images(test_config.html_log, test_config.images) - - logres =[] - for test_data in test_data_list: - Errors.clear_print_logs() - Errors.set_testing_phase(test_data.image) - if not (test_config.args.rebuild or os.path.exists(test_data.gold_archive)): - msg = "Gold standard doesn't exist, skipping image:" - Errors.print_error(msg) - Errors.print_error(test_data.gold_archive) - continue - TestRunner._run_autopsy_ingest(test_data) - - if test_config.args.rebuild: - TestRunner.rebuild(test_data) - else: - logres.append(TestRunner._run_test(test_data)) - test_data.printout = Errors.printout - test_data.printerror = Errors.printerror - - Reports.write_html_foot(test_config.html_log) - # TODO: move this elsewhere - if (len(logres)>0): - for lm in logres: - for ln in lm: - Errors.add_email_msg(ln) - - # TODO: possibly worth putting this in a sub method - if all([ test_data.overall_passed for test_data in test_data_list ]): - Errors.add_email_msg("All images passed.\n") - else: - msg = "The following images failed:\n" - for test_data in test_data_list: - if not test_data.overall_passed: - msg += "\t" + test_data.image + "\n" - Errors.add_email_msg(msg) - html = open(test_config.html_log) - Errors.add_email_attachment(html.name) - html.close() - - if test_config.email_enabled: - Emailer.send_email(test_config.mail_to, test_config.mail_server, - test_config.mail_subject, Errors.email_body, Errors.email_attachs) - - def _run_autopsy_ingest(test_data): - """Run Autopsy ingest for the image in the given TestData. - - Also generates the necessary logs for rebuilding or diff. - - Args: - test_data: the TestData to run the ingest on. 
- """ - if image_type(test_data.image_file) == IMGTYPE.UNKNOWN: - Errors.print_error("Error: Image type is unrecognized:") - Errors.print_error(test_data.image_file + "\n") - return - - logging.debug("--------------------") - logging.debug(test_data.image_name) - logging.debug("--------------------") - TestRunner._run_ant(test_data) - time.sleep(2) # Give everything a second to process - - try: - # Dump the database before we diff or use it for rebuild - TskDbDiff.dump_output_db(test_data.get_db_path(DBType.OUTPUT), test_data.get_db_dump_path(DBType.OUTPUT), - test_data.get_sorted_data_path(DBType.OUTPUT)) - except sqlite3.OperationalError as e: - print("Ingest did not run properly.", - "Make sure no other instances of Autopsy are open and try again.") - sys.exit() - - # merges logs into a single log for later diff / rebuild - copy_logs(test_data) - Logs.generate_log_data(test_data) - - TestRunner._handle_solr(test_data) - TestRunner._handle_exception(test_data) - - #TODO: figure out return type of _run_test (logres) - def _run_test(test_data): - """Compare the results of the output to the gold standard. - - Args: - test_data: the TestData - - Returns: - logres? 
- """ - TestRunner._extract_gold(test_data) - - # Look for core exceptions - # @@@ Should be moved to TestResultsDiffer, but it didn't know about logres -- need to look into that - logres = Logs.search_common_log("TskCoreException", test_data) - - TestResultsDiffer.run_diff(test_data) - test_data.overall_passed = (test_data.html_report_passed and - test_data.errors_diff_passed and test_data.db_diff_passed) - - Reports.generate_reports(test_data) - if(not test_data.overall_passed): - Errors.add_email_attachment(test_data.common_log_path) - return logres - - def _extract_gold(test_data): - """Extract gold archive file to output/gold/tmp/ - - Args: - test_data: the TestData - """ - extrctr = zipfile.ZipFile(test_data.gold_archive, 'r', compression=zipfile.ZIP_DEFLATED) - extrctr.extractall(test_data.main_config.gold) - extrctr.close - time.sleep(2) - - def _handle_solr(test_data): - """Clean up SOLR index if in keep mode (-k). - - Args: - test_data: the TestData - """ - if not test_data.main_config.args.keep: - if clear_dir(test_data.solr_index): - print_report([], "DELETE SOLR INDEX", "Solr index deleted.") - else: - print_report([], "KEEP SOLR INDEX", "Solr index has been kept.") - - def _handle_exception(test_data): - """If running in exception mode, print exceptions to log. - - Args: - test_data: the TestData - """ - if test_data.main_config.args.exception: - exceptions = search_logs(test_data.main_config.args.exception_string, test_data) - okay = ("No warnings or exceptions found containing text '" + - test_data.main_config.args.exception_string + "'.") - print_report(exceptions, "EXCEPTION", okay) - - def rebuild(test_data): - """Rebuild the gold standard with the given TestData. - - Copies the test-generated database and html report files into the gold directory. 
- """ - test_config = test_data.main_config - # Errors to print - errors = [] - # Delete the current gold standards - gold_dir = test_config.img_gold - clear_dir(test_config.img_gold) - tmpdir = make_path(gold_dir, test_data.image_name) - dbinpth = test_data.get_db_path(DBType.OUTPUT) - dboutpth = make_path(tmpdir, DB_FILENAME) - dataoutpth = make_path(tmpdir, test_data.image_name + "SortedData.txt") - dbdumpinpth = test_data.get_db_dump_path(DBType.OUTPUT) - dbdumpoutpth = make_path(tmpdir, test_data.image_name + "DBDump.txt") - if not os.path.exists(test_config.img_gold): - os.makedirs(test_config.img_gold) - if not os.path.exists(tmpdir): - os.makedirs(tmpdir) - try: - shutil.copy(dbinpth, dboutpth) - if file_exists(test_data.get_sorted_data_path(DBType.OUTPUT)): - shutil.copy(test_data.get_sorted_data_path(DBType.OUTPUT), dataoutpth) - shutil.copy(dbdumpinpth, dbdumpoutpth) - error_pth = make_path(tmpdir, test_data.image_name+"SortedErrors.txt") - shutil.copy(test_data.sorted_log, error_pth) - except IOError as e: - Errors.print_error(str(e)) - Errors.add_email_message("Not rebuilt properly") - print(str(e)) - print(traceback.format_exc()) - # Rebuild the HTML report - output_html_report_dir = test_data.get_html_report_path(DBType.OUTPUT) - gold_html_report_dir = make_path(tmpdir, "Report") - - try: - shutil.copytree(output_html_report_dir, gold_html_report_dir) - except OSError as e: - errors.append(e.error()) - except Exception as e: - errors.append("Error: Unknown fatal error when rebuilding the gold html report.") - errors.append(str(e) + "\n") - print(traceback.format_exc()) - oldcwd = os.getcwd() - zpdir = gold_dir - os.chdir(zpdir) - os.chdir("..") - img_gold = "tmp" - img_archive = make_path(test_data.image_name+"-archive.zip") - comprssr = zipfile.ZipFile(img_archive, 'w',compression=zipfile.ZIP_DEFLATED) - TestRunner.zipdir(img_gold, comprssr) - comprssr.close() - os.chdir(oldcwd) - del_dir(test_config.img_gold) - okay = "Sucessfully rebuilt all gold 
standards." - print_report(errors, "REBUILDING", okay) - - def zipdir(path, zip): - for root, dirs, files in os.walk(path): - for file in files: - zip.write(os.path.join(root, file)) - - def _run_ant(test_data): - """Construct and run the ant build command for the given TestData. - - Tests Autopsy by calling RegressionTest.java via the ant build file. - - Args: - test_data: the TestData - """ - test_config = test_data.main_config - # Set up the directories - if dir_exists(test_data.output_path): - shutil.rmtree(test_data.output_path) - os.makedirs(test_data.output_path) - test_data.ant = ["ant"] - test_data.ant.append("-v") - test_data.ant.append("-f") - # case.ant.append(case.build_path) - test_data.ant.append(os.path.join("..","..","Testing","build.xml")) - test_data.ant.append("regression-test") - test_data.ant.append("-l") - test_data.ant.append(test_data.antlog_dir) - test_data.ant.append("-Dimg_path=" + test_data.image_file) - test_data.ant.append("-Dknown_bad_path=" + test_config.known_bad_path) - test_data.ant.append("-Dkeyword_path=" + test_config.keyword_path) - test_data.ant.append("-Dnsrl_path=" + test_config.nsrl_path) - test_data.ant.append("-Dgold_path=" + test_config.gold) - test_data.ant.append("-Dout_path=" + - make_local_path(test_data.output_path)) - test_data.ant.append("-Dignore_unalloc=" + "%s" % test_config.args.unallocated) - test_data.ant.append("-Dtest.timeout=" + str(test_config.timeout)) - - Errors.print_out("Ingesting Image:\n" + test_data.image_file + "\n") - Errors.print_out("CMD: " + " ".join(test_data.ant)) - Errors.print_out("Starting test...\n") - antoutpth = make_local_path(test_data.main_config.output_dir, "antRunOutput.txt") - antout = open(antoutpth, "a") - if SYS is OS.CYGWIN: - subprocess.call(test_data.ant, stdout=subprocess.PIPE) - elif SYS is OS.WIN: - theproc = subprocess.Popen(test_data.ant, shell = True, stdout=subprocess.PIPE) - theproc.communicate() - antout.close() - - -class TestData(object): - """Container for 
the input and output of a single image. - - Represents data for the test of a single image, including path to the image, - database paths, etc. - - Attributes: - main_config: the global TestConfiguration - ant: a listof_String, the ant command for this TestData - image_file: a pathto_Image, the image for this TestData - image: a String, the image file's name - image_name: a String, the image file's name with a trailing (0) - output_path: pathto_Dir, the output directory for this TestData - autopsy_data_file: a pathto_File, the IMAGE_NAMEAutopsy_data.txt file - warning_log: a pathto_File, the AutopsyLogs.txt file - antlog_dir: a pathto_File, the antlog.txt file - test_dbdump: a pathto_File, the database dump, IMAGENAMEDump.txt - common_log_path: a pathto_File, the IMAGE_NAMECOMMON_LOG file - sorted_log: a pathto_File, the IMAGENAMESortedErrors.txt file - reports_dir: a pathto_Dir, the AutopsyTestCase/Reports folder - gold_data_dir: a pathto_Dir, the gold standard directory - gold_archive: a pathto_File, the gold standard archive - logs_dir: a pathto_Dir, the location where autopsy logs are stored - solr_index: a pathto_Dir, the location of the solr index - html_report_passed: a boolean, did the HTML report diff pass? - errors_diff_passed: a boolean, did the error diff pass? - db_diff_passed: a boolean, did the db diff pass? - overall_passed: a boolean, did the test pass?
- total_test_time: a String representation of the test duration - start_date: a String representation of this TestData's start date - end_date: a String representation of the TestData's end date - total_ingest_time: a String representation of the total ingest time - artifact_count: a Nat, the number of artifacts - artifact_fail: a Nat, the number of artifact failures - heap_space: a String representation of TODO - service_times: a String representation of TODO - autopsy_version: a String, the version of autopsy that was run - ingest_messages: a Nat, the number of ingest messages - indexed_files: a Nat, the number of files indexed during the ingest - indexed_chunks: a Nat, the number of chunks indexed during the ingest - printerror: a listof_String, the error messages printed during this TestData's test - printout: a listof_String, the messages printed during this TestData's test - """ - - def __init__(self, image, main_config): - """Init this TestData with its image and the test configuration. - - Args: - image: the Image to be tested. - main_config: the global TestConfiguration. - """ - # Configuration Data - self.main_config = main_config - self.ant = [] - self.image_file = str(image) - # TODO: This 0 should be refactored out, but it will require rebuilding and changing of outputs.
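The constructor that follows keys the whole directory layout off the image name plus a literal "(0)" suffix (the TODO above). As a rough, self-contained sketch of that naming convention — `get_image_name` and `make_path` are helpers defined elsewhere in this script, so the stand-ins below are assumptions reduced to plain `os.path` calls:

```python
import os

def get_image_name(image_file):
    # Hypothetical stand-in for the script's get_image_name helper:
    # strip the directory and the extension, "/input/xp-sp3.E01" -> "xp-sp3"
    return os.path.splitext(os.path.basename(image_file))[0]

image_file = "/input/xp-sp3.E01"
image_name = get_image_name(image_file) + "(0)"  # trailing "(0)" noted in the TODO
output_path = os.path.join("output", image_name)  # stand-in for make_path(...)
```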
- self.image = get_image_name(self.image_file) - self.image_name = self.image + "(0)" - # Directory structure and files - self.output_path = make_path(self.main_config.output_dir, self.image_name) - self.autopsy_data_file = make_path(self.output_path, self.image_name + "Autopsy_data.txt") - self.warning_log = make_local_path(self.output_path, "AutopsyLogs.txt") - self.antlog_dir = make_local_path(self.output_path, "antlog.txt") - self.test_dbdump = make_path(self.output_path, self.image_name + - "DBDump.txt") - self.common_log_path = make_local_path(self.output_path, self.image_name + COMMON_LOG) - self.sorted_log = make_local_path(self.output_path, self.image_name + "SortedErrors.txt") - self.reports_dir = make_path(self.output_path, AUTOPSY_TEST_CASE, "Reports") - self.gold_data_dir = make_path(self.main_config.img_gold, self.image_name) - self.gold_archive = make_path(self.main_config.gold, - self.image_name + "-archive.zip") - self.logs_dir = make_path(self.output_path, "logs") - self.solr_index = make_path(self.output_path, AUTOPSY_TEST_CASE, - "ModuleOutput", "KeywordSearch") - # Results and Info - self.html_report_passed = False - self.errors_diff_passed = False - self.db_diff_passed = False - self.overall_passed = False - # Ingest info - self.total_test_time = "" - self.start_date = "" - self.end_date = "" - self.total_ingest_time = "" - self.artifact_count = 0 - self.artifact_fail = 0 - self.heap_space = "" - self.service_times = "" - self.autopsy_version = "" - self.ingest_messages = 0 - self.indexed_files = 0 - self.indexed_chunks = 0 - # Error tracking - self.printerror = [] - self.printout = [] - - def ant_to_string(self): - string = "" - for arg in self.ant: - string += (arg + " ") - return string - - def get_db_path(self, db_type): - """Get the path to the database file that corresponds to the given DBType. - - Args: - DBType: the DBType of the path to be generated. 
- """ - if(db_type == DBType.GOLD): - db_path = make_path(self.gold_data_dir, DB_FILENAME) - elif(db_type == DBType.OUTPUT): - db_path = make_path(self.main_config.output_dir, self.image_name, AUTOPSY_TEST_CASE, DB_FILENAME) - else: - db_path = make_path(self.main_config.output_dir, self.image_name, AUTOPSY_TEST_CASE, BACKUP_DB_FILENAME) - return db_path - - def get_html_report_path(self, html_type): - """Get the path to the HTML Report folder that corresponds to the given DBType. - - Args: - DBType: the DBType of the path to be generated. - """ - if(html_type == DBType.GOLD): - return make_path(self.gold_data_dir, "Report") - else: - # Autopsy creates an HTML report folder in the form AutopsyTestCase DATE-TIME - # It's impossible to get the exact time the folder was created, but the folder - # we are looking for is the only one in the self.reports_dir folder - html_path = "" - for fs in os.listdir(self.reports_dir): - html_path = make_path(self.reports_dir, fs) - if os.path.isdir(html_path): - break - return make_path(html_path, os.listdir(html_path)[0]) - - def get_sorted_data_path(self, file_type): - """Get the path to the SortedData file that corresponds to the given DBType. - - Args: - file_type: the DBType of the path to be generated - """ - return self._get_path_to_file(file_type, "SortedData.txt") - - def get_sorted_errors_path(self, file_type): - """Get the path to the SortedErrors file that correspodns to the given - DBType. - - Args: - file_type: the DBType of the path to be generated - """ - return self._get_path_to_file(file_type, "SortedErrors.txt") - - def get_db_dump_path(self, file_type): - """Get the path to the DBDump file that corresponds to the given DBType. - - Args: - file_type: the DBType of the path to be generated - """ - return self._get_path_to_file(file_type, "DBDump.txt") - - def _get_path_to_file(self, file_type, file_name): - """Get the path to the specified file with the specified type. 
- - Args: - file_type: the DBType of the path to be generated - file_name: a String, the filename of the path to be generated - """ - full_filename = self.image_name + file_name - if(file_type == DBType.GOLD): - return make_path(self.gold_data_dir, full_filename) - else: - return make_path(self.output_path, full_filename) - - -class TestConfiguration(object): - """Container for test configuration data. - - The Master Test Configuration. Encapsulates consolidated high level input from - config XML file and command-line arguments. - - Attributes: - args: an Args, the command line arguments - output_dir: a pathto_Dir, the output directory - input_dir: a pathto_Dir, the input directory - gold: a pathto_Dir, the gold directory - img_gold: a pathto_Dir, the temp directory where gold images are unzipped to - csv: a pathto_File, the local csv file - global_csv: a pathto_File, the global csv file - html_log: a pathto_File - known_bad_path: - keyword_path: - nsrl_path: - build_path: a pathto_File, the ant build file which runs the tests - autopsy_version: - ingest_messages: a Nat, number of ingest messages - indexed_files: a Nat, the number of indexed files - indexed_chunks: a Nat, the number of indexed chunks - timer: - images: a listof_Image, the images to be tested - timeout: a Nat, the amount of time before killing the test - ant: a listof_String, the ant command to run the tests - """ - - def __init__(self, args): - """Inits TestConfiguration and loads a config file if available. - - Args: - args: an Args, the command line arguments. 
- """ - self.args = args - # Paths: - self.output_dir = "" - self.input_dir = make_local_path("..","input") - self.gold = make_path("..", "output", "gold") - self.img_gold = make_path(self.gold, 'tmp') - # Logs: - self.csv = "" - self.global_csv = "" - self.html_log = "" - # Ant info: - self.known_bad_path = make_path(self.input_dir, "notablehashes.txt-md5.idx") - self.keyword_path = make_path(self.input_dir, "notablekeywords.xml") - self.nsrl_path = make_path(self.input_dir, "nsrl.txt-md5.idx") - self.build_path = make_path("..", "build.xml") - # Infinite Testing info - timer = 0 - self.images = [] - # Email info - self.email_enabled = args.email_enabled - self.mail_server = "" - self.mail_to = "" - self.mail_subject = "" - # Set the timeout to something huge - # The entire tester should not timeout before this number in ms - # However it only seems to take about half this time - # And it's very buggy, so we're being careful - self.timeout = 24 * 60 * 60 * 1000 * 1000 - - if not self.args.single: - self._load_config_file(self.args.config_file) - else: - self.images.append(self.args.single_file) - self._init_logs() - #self._init_imgs() - #self._init_build_info() - - - def _load_config_file(self, config_file): - """Updates this TestConfiguration's attributes from the config file. - - Initializes this TestConfiguration by iterating through the XML config file - command-line argument. 
Populates self.images and optional email configuration - - Args: - config_file: ConfigFile - the configuration file to load - """ - try: - count = 0 - parsed_config = parse(config_file) - logres = [] - counts = {} - if parsed_config.getElementsByTagName("indir"): - self.input_dir = parsed_config.getElementsByTagName("indir")[0].getAttribute("value").encode().decode("utf_8") - if parsed_config.getElementsByTagName("global_csv"): - self.global_csv = parsed_config.getElementsByTagName("global_csv")[0].getAttribute("value").encode().decode("utf_8") - self.global_csv = make_local_path(self.global_csv) - if parsed_config.getElementsByTagName("golddir"): - self.gold = parsed_config.getElementsByTagName("golddir")[0].getAttribute("value").encode().decode("utf_8") - self.img_gold = make_path(self.gold, 'tmp') - - self._init_imgs(parsed_config) - self._init_build_info(parsed_config) - self._init_email_info(parsed_config) - - except IOError as e: - msg = "There was an error loading the configuration file.\n" - msg += "\t" + str(e) - Errors.add_email_msg(msg) - logging.critical(traceback.format_exc()) - print(traceback.format_exc()) - - def _init_logs(self): - """Setup output folder, logs, and reporting infrastructure.""" - if(not dir_exists(make_path("..", "output", "results"))): - os.makedirs(make_path("..", "output", "results",)) - self.output_dir = make_path("..", "output", "results", time.strftime("%Y.%m.%d-%H.%M.%S")) - os.makedirs(self.output_dir) - self.csv = make_local_path(self.output_dir, "CSV.txt") - self.html_log = make_path(self.output_dir, "AutopsyTestCase.html") - log_name = self.output_dir + "\\regression.log" - logging.basicConfig(filename=log_name, level=logging.DEBUG) - - def _init_build_info(self, parsed_config): - """Initializes paths that point to information necessary to run the AutopsyIngest.""" - build_elements = parsed_config.getElementsByTagName("build") - if build_elements: - build_element = build_elements[0] - build_path = 
build_element.getAttribute("value").encode().decode("utf_8") - self.build_path = build_path - - def _init_imgs(self, parsed_config): - """Initialize the list of images to run tests on.""" - for element in parsed_config.getElementsByTagName("image"): - value = element.getAttribute("value").encode().decode("utf_8") - print ("Image in Config File: " + value) - if file_exists(value): - self.images.append(value) - else: - msg = "File: " + value + " doesn't exist" - Errors.print_error(msg) - Errors.add_email_msg(msg) - image_count = len(self.images) - - # Sanity check to see if there are obvious gold images that we are not testing - gold_count = 0 - for file in os.listdir(self.gold): - if not(file == 'tmp'): - gold_count+=1 - - if (image_count > gold_count): - print("******Alert: There are more input images than gold standards, some images will not be properly tested.\n") - elif (image_count < gold_count): - print("******Alert: There are more gold standards than input images, this will not check all gold Standards.\n") - - def _init_email_info(self, parsed_config): - """Initializes email information dictionary""" - email_elements = parsed_config.getElementsByTagName("email") - if email_elements: - mail_to = email_elements[0] - self.mail_to = mail_to.getAttribute("value").encode().decode("utf_8") - mail_server_elements = parsed_config.getElementsByTagName("mail_server") - if mail_server_elements: - mail_from = mail_server_elements[0] - self.mail_server = mail_from.getAttribute("value").encode().decode("utf_8") - subject_elements = parsed_config.getElementsByTagName("subject") - if subject_elements: - subject = subject_elements[0] - self.mail_subject = subject.getAttribute("value").encode().decode("utf_8") - if self.mail_server and self.mail_to and self.args.email_enabled: - self.email_enabled = True - print("Email will be sent to ", self.mail_to) - else: - print("No email will be sent.") - - -#-------------------------------------------------# -# Functions relating to 
comparing outputs # -#-------------------------------------------------# -class TestResultsDiffer(object): - """Compares results for a single test.""" - - def run_diff(test_data): - """Compares results for a single test. - - Args: - test_data: the TestData to use. - databaseDiff: TskDbDiff object created based off test_data - """ - try: - output_db = test_data.get_db_path(DBType.OUTPUT) - gold_db = test_data.get_db_path(DBType.GOLD) - output_dir = test_data.output_path - gold_bb_dump = test_data.get_sorted_data_path(DBType.GOLD) - gold_dump = test_data.get_db_dump_path(DBType.GOLD) - test_data.db_diff_passed = all(TskDbDiff(output_db, gold_db, output_dir=output_dir, gold_bb_dump=gold_bb_dump, - gold_dump=gold_dump).run_diff()) - - # Compare Exceptions - # replace is a function that replaces strings of digits with 'd' - # this is needed so dates and times will not cause the diff to fail - replace = lambda file: re.sub(re.compile(r"\d"), "d", file) - output_errors = test_data.get_sorted_errors_path(DBType.OUTPUT) - gold_errors = test_data.get_sorted_errors_path(DBType.GOLD) - passed = TestResultsDiffer._compare_text(output_errors, gold_errors, - replace) - test_data.errors_diff_passed = passed - - # Compare html output - gold_report_path = test_data.get_html_report_path(DBType.GOLD) - output_report_path = test_data.get_html_report_path(DBType.OUTPUT) - passed = TestResultsDiffer._html_report_diff(gold_report_path, - output_report_path) - test_data.html_report_passed = passed - - # Clean up tmp folder - del_dir(test_data.gold_data_dir) - - except sqlite3.OperationalError as e: - Errors.print_error("Tests failed while running the diff:\n") - Errors.print_error(str(e)) - except TskDbDiffException as e: - Errors.print_error(str(e)) - except Exception as e: - Errors.print_error("Tests failed due to an error, try rebuilding or creating gold standards.\n") - Errors.print_error(str(e) + "\n") - print(traceback.format_exc()) - - def _compare_text(output_file, gold_file,
process=None): - """Compare two text files. - - Args: - output_file: a pathto_File, the output text file - gold_file: a pathto_File, the input text file - process: (optional) a function of String -> String that will be - called on each input file before the diff, if specified. - """ - if(not file_exists(output_file)): - return False - output_data = codecs.open(output_file, "r", "utf_8").read() - gold_data = codecs.open(gold_file, "r", "utf_8").read() - - if process is not None: - output_data = process(output_data) - gold_data = process(gold_data) - - if (not(gold_data == output_data)): - diff_path = os.path.splitext(os.path.basename(output_file))[0] - diff_path += "-Diff.txt" - diff_file = codecs.open(diff_path, "wb", "utf_8") - dffcmdlst = ["diff", output_file, gold_file] - subprocess.call(dffcmdlst, stdout = diff_file) - Errors.add_email_attachment(diff_path) - msg = "There was a difference in " - msg += os.path.basename(output_file) + ".\n" - Errors.add_email_msg(msg) - Errors.print_error(msg) - return False - else: - return True - - def _html_report_diff(gold_report_path, output_report_path): - """Compare the output and gold html reports. - - Args: - gold_report_path: a pathto_Dir, the gold HTML report directory - output_report_path: a pathto_Dir, the output HTML report directory - - Returns: - true, if the reports match, false otherwise. - """ - try: - gold_html_files = get_files_by_ext(gold_report_path, ".html") - output_html_files = get_files_by_ext(output_report_path, ".html") - - #ensure both reports have the same number of files and are in the same order - if(len(gold_html_files) != len(output_html_files)): - msg = "The reports did not have the same number of files." - msg += " One of the reports may have been corrupted."
- Errors.print_error(msg) - else: - gold_html_files.sort() - output_html_files.sort() - - total = {"Gold": 0, "New": 0} - for gold, output in zip(gold_html_files, output_html_files): - count = TestResultsDiffer._compare_report_files(gold, output) - total["Gold"] += count[0] - total["New"] += count[1] - - okay = "The test report matches the gold report." - errors = ["Gold report had " + str(total["Gold"]) + " errors", "New report had " + str(total["New"]) + " errors."] - print_report(errors, "REPORT COMPARISON", okay) - - if total["Gold"] == total["New"]: - return True - else: - Errors.print_error("The reports did not match each other.\n " + errors[0] + " and the " + errors[1]) - return False - except OSError as e: - Errors.print_error(str(e)) - return False - except Exception as e: - Errors.print_error("Error: Unknown fatal error comparing reports.") - Errors.print_error(str(e) + "\n") - logging.critical(traceback.format_exc()) - return False - - def _compare_report_files(a_path, b_path): - """Compares the two specified report html files. - - Args: - a_path: a pathto_File, the first html report file - b_path: a pathto_File, the second html report file - - Returns: - a tuple of (Nat, Nat), which represent the length of each - unordered list in the html report files, or (0, 0) if the - lengths are the same. - """ - a_file = open(a_path) - b_file = open(b_path) - a = a_file.read() - b = b_file.read() - a = a[a.find("
    "):] - b = b[b.find("
      "):] - - a_list = TestResultsDiffer._split(a, 50) - b_list = TestResultsDiffer._split(b, 50) - if not len(a_list) == len(b_list): - ex = (len(a_list), len(b_list)) - return ex - else: - return (0, 0) - - # Split a string into an array of string of the given size - def _split(input, size): - return [input[start:start+size] for start in range(0, len(input), size)] - - -class Reports(object): - def generate_reports(test_data): - """Generate the reports for a single test - - Args: - test_data: the TestData - """ - Reports._generate_html(test_data) - if test_data.main_config.global_csv: - Reports._generate_csv(test_data.main_config.global_csv, test_data) - else: - Reports._generate_csv(test_data.main_config.csv, test_data) - - def _generate_html(test_data): - """Generate the HTML log file.""" - # If the file doesn't exist yet, this is the first test_config to run for - # this test, so we need to make the start of the html log - html_log = test_data.main_config.html_log - if not file_exists(html_log): - Reports.write_html_head() - with open(html_log, "a") as html: - # The image title - title = "

      " + test_data.image_name + " \ - tested on " + socket.gethostname() + "

      \ -

      \ - Errors and Warnings |\ - Information |\ - General Output |\ - Logs\ -

      " - # The script errors found - if not test_data.overall_passed: - ids = 'errors1' - else: - ids = 'errors' - errors = "
      \ -

      Errors and Warnings

      \ -
      " - # For each error we have logged in the test_config - for error in test_data.printerror: - # Replace < and > to avoid any html display errors - errors += "

      " + error.replace("<", "<").replace(">", ">") + "

      " - # If there is a \n, we probably want a
      in the html - if "\n" in error: - errors += "
      " - errors += "
      " - - # Links to the logs - logs = "
      \ -

      Logs

      \ -
      " - logs_path = test_data.logs_dir - for file in os.listdir(logs_path): - logs += "

      " + file + "

      " - logs += "
      " - - # All the testing information - info = "
      \ -

      Information

      \ -
      \ - " - # The individual elements - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" - info += "" -# info += "" -# info += "" -# info += "" -# info += "" -# info += "" -# info += "" - info += "
      Image Path:" + test_data.image_file + "
      Image Name:" + test_data.image_name + "
      test_config Output Directory:" + test_data.main_config.output_dir + "
      Autopsy Version:" + test_data.autopsy_version + "
      Heap Space:" + test_data.heap_space + "
      Test Start Date:" + test_data.start_date + "
      Test End Date:" + test_data.end_date + "
      Total Test Time:" + test_data.total_test_time + "
      Total Ingest Time:" + test_data.total_ingest_time + "
      Exceptions Count:" + str(len(get_exceptions(test_data))) + "
      Autopsy OutOfMemoryExceptions:" + str(len(search_logs("OutOfMemoryException", test_data))) + "
      Autopsy OutOfMemoryErrors:" + str(len(search_logs("OutOfMemoryError", test_data))) + "
      Tika OutOfMemoryErrors/Exceptions:" + str(Reports._get_num_memory_errors("tika", test_data)) + "
      Solr OutOfMemoryErrors/Exceptions:" + str(Reports._get_num_memory_errors("solr", test_data)) + "
      TskCoreExceptions:" + str(len(search_log_set("autopsy", "TskCoreException", test_data))) + "
      TskDataExceptions:" + str(len(search_log_set("autopsy", "TskDataException", test_data))) + "
      Ingest Messages Count:" + str(test_data.ingest_messages) + "
      Indexed Files Count:" + str(test_data.indexed_files) + "
      Indexed File Chunks Count:" + str(test_data.indexed_chunks) + "
      Out Of Disk Space:\ -

      (will skew other test results)

      " + str(len(search_log_set("autopsy", "Stopping ingest due to low disk space on disk", test_data))) + "
      TSK Objects Count:" + str(test_data.db_diff_results.output_objs) + "
      Artifacts Count:" + str(test_data.db_diff_results.output_artifacts)+ "
      Attributes Count:" + str(test_data.db_diff_results.output_attrs) + "
      \ -
      " - # For all the general print statements in the test_config - output = "
      \ -

      General Output

      \ -
      " - # For each printout in the test_config's list - for out in test_data.printout: - output += "

      " + out + "

      " - # If there was a \n it probably means we want a
      in the html - if "\n" in out: - output += "
      " - output += "
      " - - html.write(title) - html.write(errors) - html.write(info) - html.write(logs) - html.write(output) - - def write_html_head(html_log): - """Write the top of the HTML log file. - - Args: - html_log: a pathto_File, the global HTML log - """ - with open(str(html_log), "a") as html: - head = "\ - \ - AutopsyTesttest_config Output\ - \ - \ - " - html.write(head) - - def write_html_foot(html_log): - """Write the bottom of the HTML log file. - - Args: - html_log: a pathto_File, the global HTML log - """ - with open(html_log, "a") as html: - head = "" - html.write(head) - - def html_add_images(html_log, full_image_names): - """Add all the image names to the HTML log. - - Args: - full_image_names: a listof_String, each representing an image name - html_log: a pathto_File, the global HTML log - """ - # If the file doesn't exist yet, this is the first test_config to run for - # this test, so we need to make the start of the html log - if not file_exists(html_log): - Reports.write_html_head(html_log) - with open(html_log, "a") as html: - links = [] - for full_name in full_image_names: - name = get_image_name(full_name) - links.append("" + name + "") - html.write("

      " + (" | ".join(links)) + "

      ") - - def _generate_csv(csv_path, test_data): - """Generate the CSV log file""" - # If the CSV file hasn't already been generated, this is the - # first run, and we need to add the column names - if not file_exists(csv_path): - Reports.csv_header(csv_path) - # Now add on the fields to a new row - with open(csv_path, "a") as csv: - # Variables that need to be written - vars = [] - vars.append( test_data.image_file ) - vars.append( test_data.image_name ) - vars.append( test_data.main_config.output_dir ) - vars.append( socket.gethostname() ) - vars.append( test_data.autopsy_version ) - vars.append( test_data.heap_space ) - vars.append( test_data.start_date ) - vars.append( test_data.end_date ) - vars.append( test_data.total_test_time ) - vars.append( test_data.total_ingest_time ) - vars.append( test_data.service_times ) - vars.append( str(len(get_exceptions(test_data))) ) - vars.append( str(Reports._get_num_memory_errors("autopsy", test_data)) ) - vars.append( str(Reports._get_num_memory_errors("tika", test_data)) ) - vars.append( str(Reports._get_num_memory_errors("solr", test_data)) ) - vars.append( str(len(search_log_set("autopsy", "TskCoreException", test_data))) ) - vars.append( str(len(search_log_set("autopsy", "TskDataException", test_data))) ) - vars.append( str(test_data.ingest_messages) ) - vars.append( str(test_data.indexed_files) ) - vars.append( str(test_data.indexed_chunks) ) - vars.append( str(len(search_log_set("autopsy", "Stopping ingest due to low disk space on disk", test_data))) ) -# vars.append( str(test_data.db_diff_results.output_objs) ) -# vars.append( str(test_data.db_diff_results.output_artifacts) ) -# vars.append( str(test_data.db_diff_results.output_objs) ) - vars.append( make_local_path("gold", test_data.image_name, DB_FILENAME) ) -# vars.append( test_data.db_diff_results.get_artifact_comparison() ) -# vars.append( test_data.db_diff_results.get_attribute_comparison() ) - vars.append( make_local_path("gold", test_data.image_name, 
"standard.html") ) - vars.append( str(test_data.html_report_passed) ) - vars.append( test_data.ant_to_string() ) - # Join it together with a ", " - output = "|".join(vars) - output += "\n" - # Write to the log! - csv.write(output) - - def csv_header(csv_path): - """Generate the CSV column names.""" - with open(csv_path, "w") as csv: - titles = [] - titles.append("Image Path") - titles.append("Image Name") - titles.append("Output test_config Directory") - titles.append("Host Name") - titles.append("Autopsy Version") - titles.append("Heap Space Setting") - titles.append("Test Start Date") - titles.append("Test End Date") - titles.append("Total Test Time") - titles.append("Total Ingest Time") - titles.append("Service Times") - titles.append("Autopsy Exceptions") - titles.append("Autopsy OutOfMemoryErrors/Exceptions") - titles.append("Tika OutOfMemoryErrors/Exceptions") - titles.append("Solr OutOfMemoryErrors/Exceptions") - titles.append("TskCoreExceptions") - titles.append("TskDataExceptions") - titles.append("Ingest Messages Count") - titles.append("Indexed Files Count") - titles.append("Indexed File Chunks Count") - titles.append("Out Of Disk Space") -# titles.append("Tsk Objects Count") -# titles.append("Artifacts Count") -# titles.append("Attributes Count") - titles.append("Gold Database Name") -# titles.append("Artifacts Comparison") -# titles.append("Attributes Comparison") - titles.append("Gold Report Name") - titles.append("Report Comparison") - titles.append("Ant Command Line") - output = "|".join(titles) - output += "\n" - csv.write(output) - - def _get_num_memory_errors(type, test_data): - """Get the number of OutOfMemory errors and Exceptions. - - Args: - type: a String representing the type of log to check. - test_data: the TestData to examine. 
- """ - return (len(search_log_set(type, "OutOfMemoryError", test_data)) + - len(search_log_set(type, "OutOfMemoryException", test_data))) - -class Logs(object): - - def generate_log_data(test_data): - """Find and handle relevent data from the Autopsy logs. - - Args: - test_data: the TestData whose logs to examine - """ - Logs._generate_common_log(test_data) - try: - Logs._fill_ingest_data(test_data) - except Exception as e: - Errors.print_error("Error: Unknown fatal error when filling test_config data.") - Errors.print_error(str(e) + "\n") - logging.critical(traceback.format_exc()) - # If running in verbose mode (-v) - if test_data.main_config.args.verbose: - errors = Logs._report_all_errors() - okay = "No warnings or errors in any log files." - print_report(errors, "VERBOSE", okay) - - def _generate_common_log(test_data): - """Generate the common log, the log of all exceptions and warnings from - each log file generated by Autopsy. - - Args: - test_data: the TestData to generate a log for - """ - try: - logs_path = test_data.logs_dir - common_log = codecs.open(test_data.common_log_path, "w", "utf_8") - warning_log = codecs.open(test_data.warning_log, "w", "utf_8") - common_log.write("--------------------------------------------------\n") - common_log.write(test_data.image_name + "\n") - common_log.write("--------------------------------------------------\n") - rep_path = make_local_path(test_data.main_config.output_dir) - rep_path = rep_path.replace("\\\\", "\\") - for file in os.listdir(logs_path): - log = codecs.open(make_path(logs_path, file), "r", "utf_8") - for line in log: - line = line.replace(rep_path, "test_data") - if line.startswith("Exception"): - common_log.write(file +": " + line) - elif line.startswith("Error"): - common_log.write(file +": " + line) - elif line.startswith("SEVERE"): - common_log.write(file +":" + line) - else: - warning_log.write(file +": " + line) - log.close() - common_log.write("\n") - common_log.close() - 
print(test_data.sorted_log) - srtcmdlst = ["sort", test_data.common_log_path, "-o", test_data.sorted_log] - subprocess.call(srtcmdlst) - except (OSError, IOError) as e: - Errors.print_error("Error: Unable to generate the common log.") - Errors.print_error(str(e) + "\n") - Errors.print_error(traceback.format_exc()) - logging.critical(traceback.format_exc()) - - def _fill_ingest_data(test_data): - """Fill the TestDatas variables that require the log files. - - Args: - test_data: the TestData to modify - """ - try: - # Open autopsy.log.0 - log_path = make_path(test_data.logs_dir, "autopsy.log.0") - log = open(log_path) - - # Set the TestData start time based off the first line of autopsy.log.0 - # *** If logging time format ever changes this will break *** - test_data.start_date = log.readline().split(" org.")[0] - - # Set the test_data ending time based off the "create" time (when the file was copied) - test_data.end_date = time.ctime(os.path.getmtime(log_path)) - except IOError as e: - Errors.print_error("Error: Unable to open autopsy.log.0.") - Errors.print_error(str(e) + "\n") - logging.warning(traceback.format_exc()) - # Start date must look like: "Jul 16, 2012 12:57:53 PM" - # End date must look like: "Mon Jul 16 13:02:42 2012" - # *** If logging time format ever changes this will break *** - start = datetime.datetime.strptime(test_data.start_date, "%b %d, %Y %I:%M:%S %p") - end = datetime.datetime.strptime(test_data.end_date, "%a %b %d %H:%M:%S %Y") - test_data.total_test_time = str(end - start) - - try: - # Set Autopsy version, heap space, ingest time, and service times - - version_line = search_logs("INFO: Application name: Autopsy, version:", test_data)[0] - test_data.autopsy_version = get_word_at(version_line, 5).rstrip(",") - - test_data.heap_space = search_logs("Heap memory usage:", test_data)[0].rstrip().split(": ")[1] - - ingest_line = search_logs("Ingest (including enqueue)", test_data)[0] - test_data.total_ingest_time = get_word_at(ingest_line, 
6).rstrip() - - message_line = search_log_set("autopsy", "Ingest messages count:", test_data)[0] - test_data.ingest_messages = int(message_line.rstrip().split(": ")[2]) - - files_line = search_log_set("autopsy", "Indexed files count:", test_data)[0] - test_data.indexed_files = int(files_line.rstrip().split(": ")[2]) - - chunks_line = search_log_set("autopsy", "Indexed file chunks count:", test_data)[0] - test_data.indexed_chunks = int(chunks_line.rstrip().split(": ")[2]) - except (OSError, IOError) as e: - Errors.print_error("Error: Unable to find the required information to fill test_config data.") - Errors.print_error(str(e) + "\n") - logging.critical(traceback.format_exc()) - print(traceback.format_exc()) - try: - service_lines = search_log("autopsy.log.0", "to process()", test_data) - service_list = [] - for line in service_lines: - words = line.split(" ") - # Kind of forcing our way into getting this data - # If this format changes, the tester will break - i = words.index("secs.") - times = words[i-4] + " " - times += words[i-3] + " " - times += words[i-2] + " " - times += words[i-1] + " " - times += words[i] - service_list.append(times) - test_data.service_times = "; ".join(service_list) - except (OSError, IOError) as e: - Errors.print_error("Error: Unknown fatal error when finding service times.") - Errors.print_error(str(e) + "\n") - logging.critical(traceback.format_exc()) - - def _report_all_errors(): - """Generate a list of all the errors found in the common log. - - Returns: - a listof_String, the errors found in the common log - """ - try: - return get_warnings() + get_exceptions() - except (OSError, IOError) as e: - Errors.print_error("Error: Unknown fatal error when reporting all errors.") - Errors.print_error(str(e) + "\n") - logging.warning(traceback.format_exc()) - - def search_common_log(string, test_data): - """Search the common log for any instances of a given string. - - Args: - string: the String to search for. 
- test_data: the TestData that holds the log to search. - - Returns: - a listof_String, all the lines that the string is found on - """ - results = [] - log = codecs.open(test_data.common_log_path, "r", "utf_8") - for line in log: - if string in line: - results.append(line) - log.close() - return results - - -def print_report(errors, name, okay): - """Print a report with the specified information. - - Args: - errors: a listof_String, the errors to report. - name: a String, the name of the report. - okay: the String to print when there are no errors. - """ - if errors: - Errors.print_error("--------< " + name + " >----------") - for error in errors: - Errors.print_error(str(error)) - Errors.print_error("--------< / " + name + " >--------\n") - else: - Errors.print_out("-----------------------------------------------------------------") - Errors.print_out("< " + name + " - " + okay + " />") - Errors.print_out("-----------------------------------------------------------------\n") - - -def get_exceptions(test_data): - """Get a list of the exceptions in the autopsy logs. - - Args: - test_data: the TestData to use to find the exceptions. - Returns: - a listof_String, the exceptions found in the logs. - """ - exceptions = [] - logs_path = test_data.logs_dir - results = [] - for file in os.listdir(logs_path): - if "autopsy.log" in file: - log = codecs.open(make_path(logs_path, file), "r", "utf_8") - ex = re.compile("\SException") - er = re.compile("\SError") - for line in log: - if ex.search(line) or er.search(line): - exceptions.append(line) - log.close() - return exceptions - -def get_warnings(test_data): - """Get a list of the warnings listed in the common log. - - Args: - test_data: the TestData to use to find the warnings - - Returns: - listof_String, the warnings found. 
- """ - warnings = [] - common_log = codecs.open(test_data.warning_log, "r", "utf_8") - for line in common_log: - if "warning" in line.lower(): - warnings.append(line) - common_log.close() - return warnings - -def copy_logs(test_data): - """Copy the Autopsy generated logs to output directory. - - Args: - test_data: the TestData whose logs will be copied - """ - try: - log_dir = os.path.join("..", "..", "Testing","build","test","qa-functional","work","userdir0","var","log") - shutil.copytree(log_dir, test_data.logs_dir) - except OSError as e: - printerror(test_data,"Error: Failed to copy the logs.") - printerror(test_data,str(e) + "\n") - logging.warning(traceback.format_exc()) - -def setDay(): - global Day - Day = int(strftime("%d", localtime())) - -def getLastDay(): - return Day - -def getDay(): - return int(strftime("%d", localtime())) - -def newDay(): - return getLastDay() != getDay() - -#------------------------------------------------------------# -# Exception classes to manage "acceptable" thrown exceptions # -# versus unexpected and fatal exceptions # -#------------------------------------------------------------# - -class FileNotFoundException(Exception): - """ - If a file cannot be found by one of the helper functions, - they will throw a FileNotFoundException unless the purpose - is to return False. 
- """ - def __init__(self, file): - self.file = file - self.strerror = "FileNotFoundException: " + file - - def print_error(self): - Errors.print_error("Error: File could not be found at:") - Errors.print_error(self.file + "\n") - - def error(self): - error = "Error: File could not be found at:\n" + self.file + "\n" - return error - -class DirNotFoundException(Exception): - """ - If a directory cannot be found by a helper function, - it will throw this exception - """ - def __init__(self, dir): - self.dir = dir - self.strerror = "DirNotFoundException: " + dir - - def print_error(self): - Errors.print_error("Error: Directory could not be found at:") - Errors.print_error(self.dir + "\n") - - def error(self): - error = "Error: Directory could not be found at:\n" + self.dir + "\n" - return error - - -class Errors: - """A class used to manage error reporting. - - Attributes: - printout: a listof_String, the non-error messages that were printed - printerror: a listof_String, the error messages that were printed - email_body: a String, the body of the report email - email_msg_prefix: a String, the prefix for lines added to the email - email_attchs: a listof_pathto_File, the files to be attached to the - report email - """ - printout = [] - printerror = [] - email_body = "" - email_msg_prefix = "Configuration" - email_attachs = [] - - def set_testing_phase(image_name): - """Change the email message prefix to be the given testing phase. - - Args: - image_name: a String, representing the current image being tested - """ - Errors.email_msg_prefix = image_name - - def print_out(msg): - """Print out an informational message. - - Args: - msg: a String, the message to be printed - """ - print(msg) - Errors.printout.append(msg) - - def print_error(msg): - """Print out an error message. - - Args: - msg: a String, the error message to be printed. 
- """ - print(msg) - Errors.printerror.append(msg) - - def clear_print_logs(): - """Reset the image-specific attributes of the Errors class.""" - Errors.printout = [] - Errors.printerror = [] - - def add_email_msg(msg): - """Add the given message to the body of the report email. - - Args: - msg: a String, the message to be added to the email - """ - Errors.email_body += Errors.email_msg_prefix + ":" + msg - - def add_email_attachment(path): - """Add the given file to be an attachment for the report email - - Args: - file: a pathto_File, the file to add - """ - Errors.email_attachs.append(path) - - -class DiffResults(object): - """Container for the results of the database diff tests. - - Stores artifact, object, and attribute counts and comparisons generated by - TskDbDiff. - - Attributes: - gold_attrs: a Nat, the number of gold attributes - output_attrs: a Nat, the number of output attributes - gold_objs: a Nat, the number of gold objects - output_objs: a Nat, the number of output objects - artifact_comp: a listof_String, describing the differences - attribute_comp: a listof_String, describing the differences - passed: a boolean, did the diff pass? 
- """ - def __init__(self, tsk_diff): - """Inits a DiffResults - - Args: - tsk_diff: a TskDBDiff - """ - self.gold_attrs = tsk_diff.gold_attributes - self.output_attrs = tsk_diff.autopsy_attributes - self.gold_objs = tsk_diff.gold_objects - self.output_objs = tsk_diff.autopsy_objects - self.artifact_comp = tsk_diff.artifact_comparison - self.attribute_comp = tsk_diff.attribute_comparison - self.gold_artifacts = len(tsk_diff.gold_artifacts) - self.output_artifacts = len(tsk_diff.autopsy_artifacts) - self.passed = tsk_diff.passed - - def get_artifact_comparison(self): - if not self.artifact_comp: - return "All counts matched" - else: - return "; ".join(self.artifact_comp) - - def get_attribute_comparison(self): - if not self.attribute_comp: - return "All counts matched" - list = [] - for error in self.attribute_comp: - list.append(error) - return ";".join(list) - - -#-------------------------------------------------------------# -# Parses argv and stores booleans to match command line input # -#-------------------------------------------------------------# -class Args(object): - """A container for command line options and arguments. 
- - Attributes: - single: a boolean indicating whether to run in single file mode - single_file: an Image to run the test on - rebuild: a boolean indicating whether to run in rebuild mode - list: a boolean indicating a config file was specified - unallocated: a boolean indicating unallocated space should be ignored - ignore: a boolean indicating the input directory should be ingnored - keep: a boolean indicating whether to keep the SOLR index - verbose: a boolean indicating whether verbose output should be printed - exeception: a boolean indicating whether errors containing exception - exception_string should be printed - exception_sring: a String representing and exception name - fr: a boolean indicating whether gold standard images will be downloaded - """ - def __init__(self): - self.single = False - self.single_file = "" - self.rebuild = False - self.list = False - self.config_file = "" - self.unallocated = False - self.ignore = False - self.keep = False - self.verbose = False - self.exception = False - self.exception_string = "" - self.fr = False - self.email_enabled = False - - def parse(self): - """Get the command line arguments and parse them.""" - nxtproc = [] - nxtproc.append("python3") - nxtproc.append(sys.argv.pop(0)) - while sys.argv: - arg = sys.argv.pop(0) - nxtproc.append(arg) - if(arg == "-f"): - #try: @@@ Commented out until a more specific except statement is added - arg = sys.argv.pop(0) - print("Running on a single file:") - print(path_fix(arg) + "\n") - self.single = True - self.single_file = path_fix(arg) - #except: - # print("Error: No single file given.\n") - # return False - elif(arg == "-r" or arg == "--rebuild"): - print("Running in rebuild mode.\n") - self.rebuild = True - elif(arg == "-l" or arg == "--list"): - try: - arg = sys.argv.pop(0) - nxtproc.append(arg) - print("Running from configuration file:") - print(arg + "\n") - self.list = True - self.config_file = arg - except: - print("Error: No configuration file given.\n") - return 
False - elif(arg == "-u" or arg == "--unallocated"): - print("Ignoring unallocated space.\n") - self.unallocated = True - elif(arg == "-k" or arg == "--keep"): - print("Keeping the Solr index.\n") - self.keep = True - elif(arg == "-v" or arg == "--verbose"): - print("Running in verbose mode:") - print("Printing all thrown exceptions.\n") - self.verbose = True - elif(arg == "-e" or arg == "--exception"): - try: - arg = sys.argv.pop(0) - nxtproc.append(arg) - print("Running in exception mode: ") - print("Printing all exceptions with the string '" + arg + "'\n") - self.exception = True - self.exception_string = arg - except: - print("Error: No exception string given.") - elif arg == "-h" or arg == "--help": - print(usage()) - return False - elif arg == "-fr" or arg == "--forcerun": - print("Not downloading new images") - self.fr = True - elif arg == "-e" or arg == "-email": - self.email_enabled = True - else: - print(usage()) - return False - # Return the args were sucessfully parsed - return self._sanity_check() - - def _sanity_check(self): - """Check to make sure there are no conflicting arguments and the - specified files exist. 
- - Returns: - False if there are conflicting arguments or a specified file does - not exist, True otherwise - """ - if self.single and self.list: - print("Cannot run both from config file and on a single file.") - return False - if self.list: - if not file_exists(self.config_file): - print("Configuration file does not exist at:", - self.config_file) - return False - elif self.single: - if not file_exists(self.single_file): - msg = "Image file does not exist at: " + self.single_file - return False - if (not self.single) and (not self.ignore) and (not self.list): - self.config_file = "config.xml" - if not file_exists(self.config_file): - msg = "Configuration file does not exist at: " + self.config_file - return False - - return True - -#### -# Helper Functions -#### -def search_logs(string, test_data): - """Search through all the known log files for a given string. - - Args: - string: the String to search for. - test_data: the TestData that holds the logs to search. - - Returns: - a listof_String, the lines that contained the given String. - """ - logs_path = test_data.logs_dir - results = [] - for file in os.listdir(logs_path): - log = codecs.open(make_path(logs_path, file), "r", "utf_8") - for line in log: - if string in line: - results.append(line) - log.close() - return results - -def search_log(log, string, test_data): - """Search the given log for any instances of a given string. - - Args: - log: a pathto_File, the log to search in - string: the String to search for. - test_data: the TestData that holds the log to search. 
- - Returns: - a listof_String, all the lines that the string is found on - """ - logs_path = make_path(test_data.logs_dir, log) - try: - results = [] - log = codecs.open(logs_path, "r", "utf_8") - for line in log: - if string in line: - results.append(line) - log.close() - if results: - return results - except: - raise FileNotFoundException(logs_path) - -# Search through all the the logs of the given type -# Types include autopsy, tika, and solr -def search_log_set(type, string, test_data): - """Search through all logs to the given type for the given string. - - Args: - type: the type of log to search in. - string: the String to search for. - test_data: the TestData containing the logs to search. - - Returns: - a listof_String, the lines on which the String was found. - """ - logs_path = test_data.logs_dir - results = [] - for file in os.listdir(logs_path): - if type in file: - log = codecs.open(make_path(logs_path, file), "r", "utf_8") - for line in log: - if string in line: - results.append(line) - log.close() - return results - - -def clear_dir(dir): - """Clears all files from a directory and remakes it. - - Args: - dir: a pathto_Dir, the directory to clear - """ - try: - if dir_exists(dir): - shutil.rmtree(dir) - os.makedirs(dir) - return True; - except OSError as e: - printerror(test_data,"Error: Cannot clear the given directory:") - printerror(test_data,dir + "\n") - print(str(e)) - return False; - -def del_dir(dir): - """Delete the given directory. - - Args: - dir: a pathto_Dir, the directory to delete - """ - try: - if dir_exists(dir): - shutil.rmtree(dir) - return True; - except: - printerror(test_data,"Error: Cannot delete the given directory:") - printerror(test_data,dir + "\n") - return False; - -def get_file_in_dir(dir, ext): - """Returns the first file in the given directory with the given extension. 
- - Args: - dir: a pathto_Dir, the directory to search - ext: a String, the extension to search for - - Returns: - pathto_File, the file that was found - """ - try: - for file in os.listdir(dir): - if file.endswith(ext): - return make_path(dir, file) - # If nothing has been found, raise an exception - raise FileNotFoundException(dir) - except: - raise DirNotFoundException(dir) - -def find_file_in_dir(dir, name, ext): - """Find the file with the given name in the given directory. - - Args: - dir: a pathto_Dir, the directory to search - name: a String, the basename of the file to search for - ext: a String, the extension of the file to search for - """ - try: - for file in os.listdir(dir): - if file.startswith(name): - if file.endswith(ext): - return make_path(dir, file) - raise FileNotFoundException(dir) - except: - raise DirNotFoundException(dir) - - -class OS: - LINUX, MAC, WIN, CYGWIN = range(4) - - -if __name__ == "__main__": - global SYS - if _platform == "linux" or _platform == "linux2": - SYS = OS.LINUX - elif _platform == "darwin": - SYS = OS.MAC - elif _platform == "win32": - SYS = OS.WIN - elif _platform == "cygwin": - SYS = OS.CYGWIN - - if SYS is OS.WIN or SYS is OS.CYGWIN: - main() - else: - print("We only support Windows and Cygwin at this time.") +#!/usr/bin/python +# -*- coding: utf_8 -*- + + # Autopsy Forensic Browser + # + # Copyright 2013 Basis Technology Corp. + # + # Licensed under the Apache License, Version 2.0 (the "License"); + # you may not use this file except in compliance with the License. + # You may obtain a copy of the License at + # + # http://www.apache.org/licenses/LICENSE-2.0 + # + # Unless required by applicable law or agreed to in writing, software + # distributed under the License is distributed on an "AS IS" BASIS, + # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + # See the License for the specific language governing permissions and + # limitations under the License. 
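The new script imports an `enum` helper (via `regression_utils`) and uses it below to define `DBType`. That helper's implementation is not part of this diff; a minimal sketch of such a helper, under the assumption that it simply maps each name to a small integer, is:

```python
def enum(*sequential):
    """Build a lightweight enumeration class: each name maps to its index (0, 1, ...)."""
    return type('Enum', (), dict(zip(sequential, range(len(sequential)))))

# Mirrors the DBType definition that appears later in the script.
DBType = enum('OUTPUT', 'GOLD', 'BACKUP')
print(DBType.OUTPUT, DBType.GOLD, DBType.BACKUP)  # → 0 1 2
```

With a helper like this, calls such as `get_db_path(DBType.OUTPUT)` dispatch on plain integers; this is a sketch of the assumed behavior, not the actual `regression_utils` code.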
+from tskdbdiff import TskDbDiff, TskDbDiffException +import codecs +import datetime +import logging +import os +import re +import shutil +import socket +import sqlite3 +import subprocess +import sys +from sys import platform as _platform +import time +import traceback +import xml +from time import localtime, strftime +from xml.dom.minidom import parse, parseString +import smtplib +from email.mime.image import MIMEImage +from email.mime.multipart import MIMEMultipart +from email.mime.text import MIMEText +import re +import zipfile +import zlib +import Emailer +import srcupdater +from regression_utils import * + +# +# Please read me... +# +# This is the regression testing Python script. +# It uses an ant command to run build.xml for RegressionTest.java +# +# The code is cleanly sectioned and commented. +# Please follow the current formatting. +# It is a long and potentially confusing script. +# +# Variable, function, and class names are written in Python conventions: +# this_is_a_variable this_is_a_function() ThisIsAClass +# +# + + +# Data Definitions: +# +# pathto_X: A path to type X. 
+# ConfigFile: An XML file formatted according to the template in myconfig.xml
+# ParsedConfig: A dom object that represents a ConfigFile
+# SQLCursor: A cursor received from a connection to an SQL database
+# Nat: A Natural Number
+# Image: An image
+#
+
+# Enumeration of database types used to simplify generating database paths
+DBType = enum('OUTPUT', 'GOLD', 'BACKUP')
+
+# Common filename of the output and gold databases (although they are in different directories)
+DB_FILENAME = "autopsy.db"
+
+# Backup database filename
+BACKUP_DB_FILENAME = "autopsy_backup.db"
+
+# TODO: Double check this purpose statement
+# Folder name for gold standard database testing
+AUTOPSY_TEST_CASE = "AutopsyTestCase"
+
+# TODO: Double check this purpose statement
+# The filename of the log to store error messages
+COMMON_LOG = "AutopsyErrors.txt"
+
+Day = 0
+
+#----------------------#
+#         Main         #
+#----------------------#
+def main():
+    """Parse the command-line arguments, create the configuration, and run the tests."""
+    args = Args()
+    parse_result = args.parse()
+    test_config = TestConfiguration(args)
+    # The arguments were invalid:
+    if not parse_result:
+        return
+    if(not args.fr):
+        antin = ["ant"]
+        antin.append("-f")
+        antin.append(os.path.join("..","..","build.xml"))
+        antin.append("test-download-imgs")
+        if SYS is OS.CYGWIN:
+            subprocess.call(antin)
+        elif SYS is OS.WIN:
+            theproc = subprocess.Popen(antin, shell = True, stdout=subprocess.PIPE)
+            theproc.communicate()
+    # Otherwise test away!
+    TestRunner.run_tests(test_config)
+
+
+class TestRunner(object):
+    """A collection of functions to run the regression tests."""
+
+    def run_tests(test_config):
+        """Run the tests specified by the main TestConfiguration.
+ + Executes the AutopsyIngest for each image and dispatches the results based on + the mode (rebuild or testing) + """ + test_data_list = [ TestData(image, test_config) for image in test_config.images ] + + Reports.html_add_images(test_config.html_log, test_config.images) + + logres =[] + for test_data in test_data_list: + Errors.clear_print_logs() + Errors.set_testing_phase(test_data.image) + if not (test_config.args.rebuild or os.path.exists(test_data.gold_archive)): + msg = "Gold standard doesn't exist, skipping image:" + Errors.print_error(msg) + Errors.print_error(test_data.gold_archive) + continue + TestRunner._run_autopsy_ingest(test_data) + + if test_config.args.rebuild: + TestRunner.rebuild(test_data) + else: + logres.append(TestRunner._run_test(test_data)) + test_data.printout = Errors.printout + test_data.printerror = Errors.printerror + + Reports.write_html_foot(test_config.html_log) + # TODO: move this elsewhere + if (len(logres)>0): + for lm in logres: + for ln in lm: + Errors.add_email_msg(ln) + + # TODO: possibly worth putting this in a sub method + if all([ test_data.overall_passed for test_data in test_data_list ]): + Errors.add_email_msg("All images passed.\n") + else: + msg = "The following images failed:\n" + for test_data in test_data_list: + if not test_data.overall_passed: + msg += "\t" + test_data.image + "\n" + Errors.add_email_msg(msg) + html = open(test_config.html_log) + Errors.add_email_attachment(html.name) + html.close() + + if test_config.email_enabled: + Emailer.send_email(test_config.mail_to, test_config.mail_server, + test_config.mail_subject, Errors.email_body, Errors.email_attachs) + + def _run_autopsy_ingest(test_data): + """Run Autopsy ingest for the image in the given TestData. + + Also generates the necessary logs for rebuilding or diff. + + Args: + test_data: the TestData to run the ingest on. 
+ """ + if image_type(test_data.image_file) == IMGTYPE.UNKNOWN: + Errors.print_error("Error: Image type is unrecognized:") + Errors.print_error(test_data.image_file + "\n") + return + + logging.debug("--------------------") + logging.debug(test_data.image_name) + logging.debug("--------------------") + TestRunner._run_ant(test_data) + time.sleep(2) # Give everything a second to process + + try: + # Dump the database before we diff or use it for rebuild + TskDbDiff.dump_output_db(test_data.get_db_path(DBType.OUTPUT), test_data.get_db_dump_path(DBType.OUTPUT), + test_data.get_sorted_data_path(DBType.OUTPUT)) + except sqlite3.OperationalError as e: + print("Ingest did not run properly.", + "Make sure no other instances of Autopsy are open and try again.") + sys.exit() + + # merges logs into a single log for later diff / rebuild + copy_logs(test_data) + Logs.generate_log_data(test_data) + + TestRunner._handle_solr(test_data) + TestRunner._handle_exception(test_data) + + #TODO: figure out return type of _run_test (logres) + def _run_test(test_data): + """Compare the results of the output to the gold standard. + + Args: + test_data: the TestData + + Returns: + logres? 
+        """
+        TestRunner._extract_gold(test_data)
+
+        # Look for core exceptions
+        # @@@ Should be moved to TestResultsDiffer, but it didn't know about logres -- need to look into that
+        logres = Logs.search_common_log("TskCoreException", test_data)
+
+        TestResultsDiffer.run_diff(test_data)
+        test_data.overall_passed = (test_data.html_report_passed and
+                                    test_data.errors_diff_passed and test_data.db_diff_passed)
+
+        Reports.generate_reports(test_data)
+        if(not test_data.overall_passed):
+            Errors.add_email_attachment(test_data.common_log_path)
+        return logres
+
+    def _extract_gold(test_data):
+        """Extract the gold archive file to output/gold/tmp/.
+
+        Args:
+            test_data: the TestData
+        """
+        extrctr = zipfile.ZipFile(test_data.gold_archive, 'r', compression=zipfile.ZIP_DEFLATED)
+        extrctr.extractall(test_data.main_config.gold)
+        extrctr.close()
+        time.sleep(2)
+
+    def _handle_solr(test_data):
+        """Delete the Solr index unless running in keep mode (-k).
+
+        Args:
+            test_data: the TestData
+        """
+        if not test_data.main_config.args.keep:
+            if clear_dir(test_data.solr_index):
+                print_report([], "DELETE SOLR INDEX", "Solr index deleted.")
+        else:
+            print_report([], "KEEP SOLR INDEX", "Solr index has been kept.")
+
+    def _handle_exception(test_data):
+        """If running in exception mode, print exceptions to log.
+
+        Args:
+            test_data: the TestData
+        """
+        if test_data.main_config.args.exception:
+            exceptions = search_logs(test_data.main_config.args.exception_string, test_data)
+            okay = ("No warnings or exceptions found containing text '" +
+                    test_data.main_config.args.exception_string + "'.")
+            print_report(exceptions, "EXCEPTION", okay)
+
+    def rebuild(test_data):
+        """Rebuild the gold standard with the given TestData.
+
+        Copies the test-generated database and html report files into the gold directory.
+        """
+        test_config = test_data.main_config
+        # Errors to print
+        errors = []
+        # Delete the current gold standards
+        gold_dir = test_config.img_gold
+        clear_dir(test_config.img_gold)
+        tmpdir = make_path(gold_dir, test_data.image_name)
+        dbinpth = test_data.get_db_path(DBType.OUTPUT)
+        dboutpth = make_path(tmpdir, DB_FILENAME)
+        dataoutpth = make_path(tmpdir, test_data.image_name + "SortedData.txt")
+        dbdumpinpth = test_data.get_db_dump_path(DBType.OUTPUT)
+        dbdumpoutpth = make_path(tmpdir, test_data.image_name + "DBDump.txt")
+        if not os.path.exists(test_config.img_gold):
+            os.makedirs(test_config.img_gold)
+        if not os.path.exists(tmpdir):
+            os.makedirs(tmpdir)
+        try:
+            shutil.copy(dbinpth, dboutpth)
+            if file_exists(test_data.get_sorted_data_path(DBType.OUTPUT)):
+                shutil.copy(test_data.get_sorted_data_path(DBType.OUTPUT), dataoutpth)
+            shutil.copy(dbdumpinpth, dbdumpoutpth)
+            error_pth = make_path(tmpdir, test_data.image_name + "SortedErrors.txt")
+            shutil.copy(test_data.sorted_log, error_pth)
+        except IOError as e:
+            Errors.print_error(str(e))
+            Errors.add_email_msg("Not rebuilt properly")
+            print(str(e))
+            print(traceback.format_exc())
+        # Rebuild the HTML report
+        output_html_report_dir = test_data.get_html_report_path(DBType.OUTPUT)
+        gold_html_report_dir = make_path(tmpdir, "Report")
+
+        try:
+            shutil.copytree(output_html_report_dir, gold_html_report_dir)
+        except OSError as e:
+            errors.append(str(e))
+        except Exception as e:
+            errors.append("Error: Unknown fatal error when rebuilding the gold html report.")
+            errors.append(str(e) + "\n")
+            print(traceback.format_exc())
+        oldcwd = os.getcwd()
+        zpdir = gold_dir
+        os.chdir(zpdir)
+        os.chdir("..")
+        img_gold = "tmp"
+        img_archive = make_path(test_data.image_name + "-archive.zip")
+        comprssr = zipfile.ZipFile(img_archive, 'w', compression=zipfile.ZIP_DEFLATED)
+        TestRunner.zipdir(img_gold, comprssr)
+        comprssr.close()
+        os.chdir(oldcwd)
+        del_dir(test_config.img_gold)
+        okay = "Successfully rebuilt all gold
standards." + print_report(errors, "REBUILDING", okay) + + def zipdir(path, zip): + for root, dirs, files in os.walk(path): + for file in files: + zip.write(os.path.join(root, file)) + + def _run_ant(test_data): + """Construct and run the ant build command for the given TestData. + + Tests Autopsy by calling RegressionTest.java via the ant build file. + + Args: + test_data: the TestData + """ + test_config = test_data.main_config + # Set up the directories + if dir_exists(test_data.output_path): + shutil.rmtree(test_data.output_path) + os.makedirs(test_data.output_path) + test_data.ant = ["ant"] + test_data.ant.append("-v") + test_data.ant.append("-f") + # case.ant.append(case.build_path) + test_data.ant.append(os.path.join("..","..","Testing","build.xml")) + test_data.ant.append("regression-test") + test_data.ant.append("-l") + test_data.ant.append(test_data.antlog_dir) + test_data.ant.append("-Dimg_path=" + test_data.image_file) + test_data.ant.append("-Dknown_bad_path=" + test_config.known_bad_path) + test_data.ant.append("-Dkeyword_path=" + test_config.keyword_path) + test_data.ant.append("-Dnsrl_path=" + test_config.nsrl_path) + test_data.ant.append("-Dgold_path=" + test_config.gold) + test_data.ant.append("-Dout_path=" + + make_local_path(test_data.output_path)) + test_data.ant.append("-Dignore_unalloc=" + "%s" % test_config.args.unallocated) + test_data.ant.append("-Dtest.timeout=" + str(test_config.timeout)) + + Errors.print_out("Ingesting Image:\n" + test_data.image_file + "\n") + Errors.print_out("CMD: " + " ".join(test_data.ant)) + Errors.print_out("Starting test...\n") + antoutpth = make_local_path(test_data.main_config.output_dir, "antRunOutput.txt") + antout = open(antoutpth, "a") + if SYS is OS.CYGWIN: + subprocess.call(test_data.ant, stdout=subprocess.PIPE) + elif SYS is OS.WIN: + theproc = subprocess.Popen(test_data.ant, shell = True, stdout=subprocess.PIPE) + theproc.communicate() + antout.close() + + +class TestData(object): + """Container for 
the input and output of a single image.
+
+    Represents data for the test of a single image, including path to the image,
+    database paths, etc.
+
+    Attributes:
+        main_config: the global TestConfiguration
+        ant: a listof_String, the ant command for this TestData
+        image_file: a pathto_Image, the image for this TestData
+        image: a String, the image file's name
+        image_name: a String, the image file's name with a trailing (0)
+        output_path: pathto_Dir, the output directory for this TestData
+        autopsy_data_file: a pathto_File, the IMAGE_NAMEAutopsy_data.txt file
+        warning_log: a pathto_File, the AutopsyLogs.txt file
+        antlog_dir: a pathto_File, the antlog.txt file
+        test_dbdump: a pathto_File, the database dump, IMAGENAMEDump.txt
+        common_log_path: a pathto_File, the IMAGE_NAMECOMMON_LOG file
+        sorted_log: a pathto_File, the IMAGENAMESortedErrors.txt file
+        reports_dir: a pathto_Dir, the AutopsyTestCase/Reports folder
+        gold_data_dir: a pathto_Dir, the gold standard directory
+        gold_archive: a pathto_File, the gold standard archive
+        logs_dir: a pathto_Dir, the location where autopsy logs are stored
+        solr_index: a pathto_Dir, the location of the solr index
+        html_report_passed: a boolean, did the HTML report diff pass?
+        errors_diff_passed: a boolean, did the error diff pass?
+        db_diff_passed: a boolean, did the db diff pass?
+        overall_passed: a boolean, did the test pass?
+        total_test_time: a String representation of the test duration
+        start_date: a String representation of this TestData's start date
+        end_date: a String representation of the TestData's end date
+        total_ingest_time: a String representation of the total ingest time
+        artifact_count: a Nat, the number of artifacts
+        artifact_fail: a Nat, the number of artifact failures
+        heap_space: a String representation of TODO
+        service_times: a String representation of TODO
+        autopsy_version: a String, the version of autopsy that was run
+        ingest_messages: a Nat, the number of ingest messages
+        indexed_files: a Nat, the number of files indexed during the ingest
+        indexed_chunks: a Nat, the number of chunks indexed during the ingest
+        printerror: a listof_String, the error messages printed during this TestData's test
+        printout: a listof_String, the messages printed during this TestData's test
+    """
+
+    def __init__(self, image, main_config):
+        """Init this TestData with its image and the test configuration.
+
+        Args:
+            image: the Image to be tested.
+            main_config: the global TestConfiguration.
+        """
+        # Configuration Data
+        self.main_config = main_config
+        self.ant = []
+        self.image_file = str(image)
+        # TODO: This 0 should be refactored out, but it will require rebuilding and changing of outputs.
+ self.image = get_image_name(self.image_file) + self.image_name = self.image + "(0)" + # Directory structure and files + self.output_path = make_path(self.main_config.output_dir, self.image_name) + self.autopsy_data_file = make_path(self.output_path, self.image_name + "Autopsy_data.txt") + self.warning_log = make_local_path(self.output_path, "AutopsyLogs.txt") + self.antlog_dir = make_local_path(self.output_path, "antlog.txt") + self.test_dbdump = make_path(self.output_path, self.image_name + + "DBDump.txt") + self.common_log_path = make_local_path(self.output_path, self.image_name + COMMON_LOG) + self.sorted_log = make_local_path(self.output_path, self.image_name + "SortedErrors.txt") + self.reports_dir = make_path(self.output_path, AUTOPSY_TEST_CASE, "Reports") + self.gold_data_dir = make_path(self.main_config.img_gold, self.image_name) + self.gold_archive = make_path(self.main_config.gold, + self.image_name + "-archive.zip") + self.logs_dir = make_path(self.output_path, "logs") + self.solr_index = make_path(self.output_path, AUTOPSY_TEST_CASE, + "ModuleOutput", "KeywordSearch") + # Results and Info + self.html_report_passed = False + self.errors_diff_passed = False + self.db_diff_passed = False + self.overall_passed = False + # Ingest info + self.total_test_time = "" + self.start_date = "" + self.end_date = "" + self.total_ingest_time = "" + self.artifact_count = 0 + self.artifact_fail = 0 + self.heap_space = "" + self.service_times = "" + self.autopsy_version = "" + self.ingest_messages = 0 + self.indexed_files = 0 + self.indexed_chunks = 0 + # Error tracking + self.printerror = [] + self.printout = [] + + def ant_to_string(self): + string = "" + for arg in self.ant: + string += (arg + " ") + return string + + def get_db_path(self, db_type): + """Get the path to the database file that corresponds to the given DBType. + + Args: + DBType: the DBType of the path to be generated. 
+ """ + if(db_type == DBType.GOLD): + db_path = make_path(self.gold_data_dir, DB_FILENAME) + elif(db_type == DBType.OUTPUT): + db_path = make_path(self.main_config.output_dir, self.image_name, AUTOPSY_TEST_CASE, DB_FILENAME) + else: + db_path = make_path(self.main_config.output_dir, self.image_name, AUTOPSY_TEST_CASE, BACKUP_DB_FILENAME) + return db_path + + def get_html_report_path(self, html_type): + """Get the path to the HTML Report folder that corresponds to the given DBType. + + Args: + DBType: the DBType of the path to be generated. + """ + if(html_type == DBType.GOLD): + return make_path(self.gold_data_dir, "Report") + else: + # Autopsy creates an HTML report folder in the form AutopsyTestCase DATE-TIME + # It's impossible to get the exact time the folder was created, but the folder + # we are looking for is the only one in the self.reports_dir folder + html_path = "" + for fs in os.listdir(self.reports_dir): + html_path = make_path(self.reports_dir, fs) + if os.path.isdir(html_path): + break + return make_path(html_path, os.listdir(html_path)[0]) + + def get_sorted_data_path(self, file_type): + """Get the path to the SortedData file that corresponds to the given DBType. + + Args: + file_type: the DBType of the path to be generated + """ + return self._get_path_to_file(file_type, "SortedData.txt") + + def get_sorted_errors_path(self, file_type): + """Get the path to the SortedErrors file that correspodns to the given + DBType. + + Args: + file_type: the DBType of the path to be generated + """ + return self._get_path_to_file(file_type, "SortedErrors.txt") + + def get_db_dump_path(self, file_type): + """Get the path to the DBDump file that corresponds to the given DBType. + + Args: + file_type: the DBType of the path to be generated + """ + return self._get_path_to_file(file_type, "DBDump.txt") + + def _get_path_to_file(self, file_type, file_name): + """Get the path to the specified file with the specified type. 
+ + Args: + file_type: the DBType of the path to be generated + file_name: a String, the filename of the path to be generated + """ + full_filename = self.image_name + file_name + if(file_type == DBType.GOLD): + return make_path(self.gold_data_dir, full_filename) + else: + return make_path(self.output_path, full_filename) + + +class TestConfiguration(object): + """Container for test configuration data. + + The Master Test Configuration. Encapsulates consolidated high level input from + config XML file and command-line arguments. + + Attributes: + args: an Args, the command line arguments + output_dir: a pathto_Dir, the output directory + input_dir: a pathto_Dir, the input directory + gold: a pathto_Dir, the gold directory + img_gold: a pathto_Dir, the temp directory where gold images are unzipped to + csv: a pathto_File, the local csv file + global_csv: a pathto_File, the global csv file + html_log: a pathto_File + known_bad_path: + keyword_path: + nsrl_path: + build_path: a pathto_File, the ant build file which runs the tests + autopsy_version: + ingest_messages: a Nat, number of ingest messages + indexed_files: a Nat, the number of indexed files + indexed_chunks: a Nat, the number of indexed chunks + timer: + images: a listof_Image, the images to be tested + timeout: a Nat, the amount of time before killing the test + ant: a listof_String, the ant command to run the tests + """ + + def __init__(self, args): + """Inits TestConfiguration and loads a config file if available. + + Args: + args: an Args, the command line arguments. 
+ """ + self.args = args + # Paths: + self.output_dir = "" + self.input_dir = make_local_path("..","input") + self.gold = make_path("..", "output", "gold") + self.img_gold = make_path(self.gold, 'tmp') + # Logs: + self.csv = "" + self.global_csv = "" + self.html_log = "" + # Ant info: + self.known_bad_path = make_path(self.input_dir, "notablehashes.txt-md5.idx") + self.keyword_path = make_path(self.input_dir, "notablekeywords.xml") + self.nsrl_path = make_path(self.input_dir, "nsrl.txt-md5.idx") + self.build_path = make_path("..", "build.xml") + # Infinite Testing info + timer = 0 + self.images = [] + # Email info + self.email_enabled = args.email_enabled + self.mail_server = "" + self.mail_to = "" + self.mail_subject = "" + # Set the timeout to something huge + # The entire tester should not timeout before this number in ms + # However it only seems to take about half this time + # And it's very buggy, so we're being careful + self.timeout = 24 * 60 * 60 * 1000 * 1000 + + if not self.args.single: + self._load_config_file(self.args.config_file) + else: + self.images.append(self.args.single_file) + self._init_logs() + #self._init_imgs() + #self._init_build_info() + + + def _load_config_file(self, config_file): + """Updates this TestConfiguration's attributes from the config file. + + Initializes this TestConfiguration by iterating through the XML config file + command-line argument. 
Populates self.images and optional email configuration + + Args: + config_file: ConfigFile - the configuration file to load + """ + try: + count = 0 + parsed_config = parse(config_file) + logres = [] + counts = {} + if parsed_config.getElementsByTagName("indir"): + self.input_dir = parsed_config.getElementsByTagName("indir")[0].getAttribute("value").encode().decode("utf_8") + if parsed_config.getElementsByTagName("global_csv"): + self.global_csv = parsed_config.getElementsByTagName("global_csv")[0].getAttribute("value").encode().decode("utf_8") + self.global_csv = make_local_path(self.global_csv) + if parsed_config.getElementsByTagName("golddir"): + self.gold = parsed_config.getElementsByTagName("golddir")[0].getAttribute("value").encode().decode("utf_8") + self.img_gold = make_path(self.gold, 'tmp') + + self._init_imgs(parsed_config) + self._init_build_info(parsed_config) + self._init_email_info(parsed_config) + + except IOError as e: + msg = "There was an error loading the configuration file.\n" + msg += "\t" + str(e) + Errors.add_email_msg(msg) + logging.critical(traceback.format_exc()) + print(traceback.format_exc()) + + def _init_logs(self): + """Setup output folder, logs, and reporting infrastructure.""" + if(not dir_exists(make_path("..", "output", "results"))): + os.makedirs(make_path("..", "output", "results",)) + self.output_dir = make_path("..", "output", "results", time.strftime("%Y.%m.%d-%H.%M.%S")) + os.makedirs(self.output_dir) + self.csv = make_local_path(self.output_dir, "CSV.txt") + self.html_log = make_path(self.output_dir, "AutopsyTestCase.html") + log_name = self.output_dir + "\\regression.log" + logging.basicConfig(filename=log_name, level=logging.DEBUG) + + def _init_build_info(self, parsed_config): + """Initializes paths that point to information necessary to run the AutopsyIngest.""" + build_elements = parsed_config.getElementsByTagName("build") + if build_elements: + build_element = build_elements[0] + build_path = 
build_element.getAttribute("value").encode().decode("utf_8") + self.build_path = build_path + + def _init_imgs(self, parsed_config): + """Initialize the list of images to run tests on.""" + for element in parsed_config.getElementsByTagName("image"): + value = element.getAttribute("value").encode().decode("utf_8") + print ("Image in Config File: " + value) + if file_exists(value): + self.images.append(value) + else: + msg = "File: " + value + " doesn't exist" + Errors.print_error(msg) + Errors.add_email_msg(msg) + image_count = len(self.images) + + # Sanity check to see if there are obvious gold images that we are not testing + gold_count = 0 + for file in os.listdir(self.gold): + if not(file == 'tmp'): + gold_count+=1 + + if (image_count > gold_count): + print("******Alert: There are more input images than gold standards, some images will not be properly tested.\n") + elif (image_count < gold_count): + print("******Alert: There are more gold standards than input images, this will not check all gold Standards.\n") + + def _init_email_info(self, parsed_config): + """Initializes email information dictionary""" + email_elements = parsed_config.getElementsByTagName("email") + if email_elements: + mail_to = email_elements[0] + self.mail_to = mail_to.getAttribute("value").encode().decode("utf_8") + mail_server_elements = parsed_config.getElementsByTagName("mail_server") + if mail_server_elements: + mail_from = mail_server_elements[0] + self.mail_server = mail_from.getAttribute("value").encode().decode("utf_8") + subject_elements = parsed_config.getElementsByTagName("subject") + if subject_elements: + subject = subject_elements[0] + self.mail_subject = subject.getAttribute("value").encode().decode("utf_8") + if self.mail_server and self.mail_to and self.args.email_enabled: + self.email_enabled = True + print("Email will be sent to ", self.mail_to) + else: + print("No email will be sent.") + + +#-------------------------------------------------# +# Functions relating to 
comparing outputs            #
+#-------------------------------------------------#
+class TestResultsDiffer(object):
+    """Compares results for a single test."""
+
+    def run_diff(test_data):
+        """Compares results for a single test.
+
+        Args:
+            test_data: the TestData to use.
+            databaseDiff: TskDbDiff object created based off test_data
+        """
+        try:
+            output_db = test_data.get_db_path(DBType.OUTPUT)
+            gold_db = test_data.get_db_path(DBType.GOLD)
+            output_dir = test_data.output_path
+            gold_bb_dump = test_data.get_sorted_data_path(DBType.GOLD)
+            gold_dump = test_data.get_db_dump_path(DBType.GOLD)
+            test_data.db_diff_passed = all(TskDbDiff(output_db, gold_db, output_dir=output_dir, gold_bb_dump=gold_bb_dump,
+            gold_dump=gold_dump).run_diff())
+
+            # Compare Exceptions
+            # replace is a function that replaces strings of digits with 'd'
+            # this is needed so dates and times will not cause the diff to fail
+            replace = lambda file: re.sub(re.compile(r"\d"), "d", file)
+            output_errors = test_data.get_sorted_errors_path(DBType.OUTPUT)
+            gold_errors = test_data.get_sorted_errors_path(DBType.GOLD)
+            passed = TestResultsDiffer._compare_text(output_errors, gold_errors,
+                                                     replace)
+            test_data.errors_diff_passed = passed
+
+            # Compare html output
+            gold_report_path = test_data.get_html_report_path(DBType.GOLD)
+            output_report_path = test_data.get_html_report_path(DBType.OUTPUT)
+            passed = TestResultsDiffer._html_report_diff(gold_report_path,
+                                                         output_report_path)
+            test_data.html_report_passed = passed
+
+            # Clean up tmp folder
+            del_dir(test_data.gold_data_dir)
+
+        except sqlite3.OperationalError as e:
+            Errors.print_error("Tests failed while running the diff:\n")
+            Errors.print_error(str(e))
+        except TskDbDiffException as e:
+            Errors.print_error(str(e))
+        except Exception as e:
+            Errors.print_error("Tests failed due to an error, try rebuilding or creating gold standards.\n")
+            Errors.print_error(str(e) + "\n")
+            print(traceback.format_exc())
+
+    def _compare_text(output_file, gold_file,
process=None):
+        """Compare two text files.
+
+        Args:
+            output_file: a pathto_File, the output text file
+            gold_file: a pathto_File, the input text file
+            process: (optional) a function of String -> String that will be
+            called on each input file before the diff, if specified.
+        """
+        if(not file_exists(output_file)):
+            return False
+        output_data = codecs.open(output_file, "r", "utf_8").read()
+        gold_data = codecs.open(gold_file, "r", "utf_8").read()
+
+        if process is not None:
+            output_data = process(output_data)
+            gold_data = process(gold_data)
+
+        if (not(gold_data == output_data)):
+            diff_path = os.path.splitext(os.path.basename(output_file))[0]
+            diff_path += "-Diff.txt"
+            diff_file = codecs.open(diff_path, "wb", "utf_8")
+            dffcmdlst = ["diff", output_file, gold_file]
+            subprocess.call(dffcmdlst, stdout = diff_file)
+            Errors.add_email_attachment(diff_path)
+            msg = "There was a difference in "
+            msg += os.path.basename(output_file) + ".\n"
+            Errors.add_email_msg(msg)
+            Errors.print_error(msg)
+            return False
+        else:
+            return True
+
+    def _html_report_diff(gold_report_path, output_report_path):
+        """Compare the output and gold html reports.
+
+        Args:
+            gold_report_path: a pathto_Dir, the gold HTML report directory
+            output_report_path: a pathto_Dir, the output HTML report directory
+
+        Returns:
+            true, if the reports match, false otherwise.
+        """
+        try:
+            gold_html_files = get_files_by_ext(gold_report_path, ".html")
+            output_html_files = get_files_by_ext(output_report_path, ".html")
+
+            #ensure both reports have the same number of files and are in the same order
+            if(len(gold_html_files) != len(output_html_files)):
+                msg = "The reports did not have the same number of files. "
+                msg += "One of the reports may have been corrupted."
+                Errors.print_error(msg)
+            else:
+                gold_html_files.sort()
+                output_html_files.sort()
+
+                total = {"Gold": 0, "New": 0}
+                for gold, output in zip(gold_html_files, output_html_files):
+                    count = TestResultsDiffer._compare_report_files(gold, output)
+                    total["Gold"] += count[0]
+                    total["New"] += count[1]
+
+                okay = "The test report matches the gold report."
+                errors = ["Gold report had " + str(total["Gold"]) + " errors", "New report had " + str(total["New"]) + " errors."]
+                print_report(errors, "REPORT COMPARISON", okay)
+
+                if total["Gold"] == total["New"]:
+                    return True
+                else:
+                    Errors.print_error("The reports did not match each other.\n " + errors[0] + " and the " + errors[1])
+                    return False
+        except OSError as e:
+            Errors.print_error(str(e))
+            return False
+        except Exception as e:
+            Errors.print_error("Error: Unknown fatal error comparing reports.")
+            Errors.print_error(str(e) + "\n")
+            logging.critical(traceback.format_exc())
+            return False
+
+    def _compare_report_files(a_path, b_path):
+        """Compares the two specified report html files.
+
+        Args:
+            a_path: a pathto_File, the first html report file
+            b_path: a pathto_File, the second html report file
+
+        Returns:
+            a tuple of (Nat, Nat), which represent the length of each
+            unordered list in the html report files, or (0, 0) if the
+            lengths are the same.
+        """
+        a_file = open(a_path)
+        b_file = open(b_path)
+        a = a_file.read()
+        b = b_file.read()
+        a = a[a.find("<ul>"):]
+        b = b[b.find("<ul>"):]
+
+        a_list = TestResultsDiffer._split(a, 50)
+        b_list = TestResultsDiffer._split(b, 50)
+        if not len(a_list) == len(b_list):
+            ex = (len(a_list), len(b_list))
+            return ex
+        else:
+            return (0, 0)
+
+    # Split a string into an array of string of the given size
+    def _split(input, size):
+        return [input[start:start+size] for start in range(0, len(input), size)]
+
+
+class Reports(object):
+    def generate_reports(test_data):
+        """Generate the reports for a single test
+
+        Args:
+            test_data: the TestData
+        """
+        Reports._generate_html(test_data)
+        if test_data.main_config.global_csv:
+            Reports._generate_csv(test_data.main_config.global_csv, test_data)
+        else:
+            Reports._generate_csv(test_data.main_config.csv, test_data)
+
+    def _generate_html(test_data):
+        """Generate the HTML log file."""
+        # If the file doesn't exist yet, this is the first test_config to run for
+        # this test, so we need to make the start of the html log
+        html_log = test_data.main_config.html_log
+        if not file_exists(html_log):
+            Reports.write_html_head(html_log)
+        with open(html_log, "a") as html:
+            # The image title
+            title = "<h1><a name='" + test_data.image_name + "'>" + test_data.image_name + " \
+                    tested on " + socket.gethostname() + "</a></h1>\
+                    <p>\
+                    <a href='#" + test_data.image_name + "-errors'>Errors and Warnings</a> |\
+                    <a href='#" + test_data.image_name + "-info'>Information</a> |\
+                    <a href='#" + test_data.image_name + "-general'>General Output</a> |\
+                    <a href='#" + test_data.image_name + "-logs'>Logs</a>\
+                    </p>"
+            # The script errors found
+            if not test_data.overall_passed:
+                ids = 'errors1'
+            else:
+                ids = 'errors'
+            errors = "<div id='" + ids + "'>\
+                    <h2><a name='" + test_data.image_name + "-errors'>Errors and Warnings</a></h2>\
+                    <hr color='#FF0000'>"
+            # For each error we have logged in the test_config
+            for error in test_data.printerror:
+                # Replace < and > to avoid any html display errors
+                errors += "<p>" + error.replace("<", "&lt;").replace(">", "&gt;") + "</p>"
+                # If there is a \n, we probably want a <br> in the html
+                if "\n" in error:
+                    errors += "<br>"
+            errors += "</div>"
+
+            # Links to the logs
+            logs = "<div id='logs'>\
+                    <h2><a name='" + test_data.image_name + "-logs'>Logs</a></h2>\
+                    <hr color='#282828'>"
+            logs_path = test_data.logs_dir
+            for file in os.listdir(logs_path):
+                logs += "<p><a href='file:\\" + logs_path + "\\" + file + "' target='_blank'>" + file + "</a></p>"
+            logs += "</div>"
+
+            # All the testing information
+            info = "<div id='info'>\
+                    <h2><a name='" + test_data.image_name + "-info'>Information</a></h2>\
+                    <hr color='#282828'>\
+                    <table cellspacing='5px'>"
+            # The individual elements
+            info += "<tr><td>Image Path:</td>"
+            info += "<td>" + test_data.image_file + "</td></tr>"
+            info += "<tr><td>Image Name:</td>"
+            info += "<td>" + test_data.image_name + "</td></tr>"
+            info += "<tr><td>test_config Output Directory:</td>"
+            info += "<td>" + test_data.main_config.output_dir + "</td></tr>"
+            info += "<tr><td>Autopsy Version:</td>"
+            info += "<td>" + test_data.autopsy_version + "</td></tr>"
+            info += "<tr><td>Heap Space:</td>"
+            info += "<td>" + test_data.heap_space + "</td></tr>"
+            info += "<tr><td>Test Start Date:</td>"
+            info += "<td>" + test_data.start_date + "</td></tr>"
+            info += "<tr><td>Test End Date:</td>"
+            info += "<td>" + test_data.end_date + "</td></tr>"
+            info += "<tr><td>Total Test Time:</td>"
+            info += "<td>" + test_data.total_test_time + "</td></tr>"
+            info += "<tr><td>Total Ingest Time:</td>"
+            info += "<td>" + test_data.total_ingest_time + "</td></tr>"
+            info += "<tr><td>Exceptions Count:</td>"
+            info += "<td>" + str(len(get_exceptions(test_data))) + "</td></tr>"
+            info += "<tr><td>Autopsy OutOfMemoryExceptions:</td>"
+            info += "<td>" + str(len(search_logs("OutOfMemoryException", test_data))) + "</td></tr>"
+            info += "<tr><td>Autopsy OutOfMemoryErrors:</td>"
+            info += "<td>" + str(len(search_logs("OutOfMemoryError", test_data))) + "</td></tr>"
+            info += "<tr><td>Tika OutOfMemoryErrors/Exceptions:</td>"
+            info += "<td>" + str(Reports._get_num_memory_errors("tika", test_data)) + "</td></tr>"
+            info += "<tr><td>Solr OutOfMemoryErrors/Exceptions:</td>"
+            info += "<td>" + str(Reports._get_num_memory_errors("solr", test_data)) + "</td></tr>"
+            info += "<tr><td>TskCoreExceptions:</td>"
+            info += "<td>" + str(len(search_log_set("autopsy", "TskCoreException", test_data))) + "</td></tr>"
+            info += "<tr><td>TskDataExceptions:</td>"
+            info += "<td>" + str(len(search_log_set("autopsy", "TskDataException", test_data))) + "</td></tr>"
+            info += "<tr><td>Ingest Messages Count:</td>"
+            info += "<td>" + str(test_data.ingest_messages) + "</td></tr>"
+            info += "<tr><td>Indexed Files Count:</td>"
+            info += "<td>" + str(test_data.indexed_files) + "</td></tr>"
+            info += "<tr><td>Indexed File Chunks Count:</td>"
+            info += "<td>" + str(test_data.indexed_chunks) + "</td></tr>"
+            info += "<tr><td>Out Of Disk Space:\
+                    <p>(will skew other test results)</p></td>"
+            info += "<td>" + str(len(search_log_set("autopsy", "Stopping ingest due to low disk space on disk", test_data))) + "</td></tr>"
+#           info += "<tr><td>TSK Objects Count:</td>"
+#           info += "<td>" + str(test_data.db_diff_results.output_objs) + "</td></tr>"
+#           info += "<tr><td>Artifacts Count:</td>"
+#           info += "<td>" + str(test_data.db_diff_results.output_artifacts)+ "</td></tr>"
+#           info += "<tr><td>Attributes Count:</td>"
+#           info += "<td>" + str(test_data.db_diff_results.output_attrs) + "</td></tr>"
+            info += "</table>\
+                    </div>"
+            # For all the general print statements in the test_config
+            output = "<div id='general'>\
+                    <h2><a name='" + test_data.image_name + "-general'>General Output</a></h2>\
+                    <hr color='#282828'>"
+            # For each printout in the test_config's list
+            for out in test_data.printout:
+                output += "<p>" + out + "</p>"
+                # If there was a \n it probably means we want a <br> in the html
+                if "\n" in out:
+                    output += "<br>"
+            output += "</div>"
+
+            html.write(title)
+            html.write(errors)
+            html.write(info)
+            html.write(logs)
+            html.write(output)
+
+    def write_html_head(html_log):
+        """Write the top of the HTML log file.
+
+        Args:
+            html_log: a pathto_File, the global HTML log
+        """
+        with open(str(html_log), "a") as html:
+            head = "<html>\
+                    <head>\
+                    <title>AutopsyTesttest_config Output</title>\
+                    </head>\
+                    <body>\
+                    "
+            html.write(head)
+
+    def write_html_foot(html_log):
+        """Write the bottom of the HTML log file.
+
+        Args:
+            html_log: a pathto_File, the global HTML log
+        """
+        with open(html_log, "a") as html:
+            head = "</body></html>"
+            html.write(head)
+
+    def html_add_images(html_log, full_image_names):
+        """Add all the image names to the HTML log.
+
+        Args:
+            full_image_names: a listof_String, each representing an image name
+            html_log: a pathto_File, the global HTML log
+        """
+        # If the file doesn't exist yet, this is the first test_config to run for
+        # this test, so we need to make the start of the html log
+        if not file_exists(html_log):
+            Reports.write_html_head(html_log)
+        with open(html_log, "a") as html:
+            links = []
+            for full_name in full_image_names:
+                name = get_image_name(full_name)
+                links.append("<a href='#" + name + "(0)'>" + name + "</a>")
+            html.write("<p align='center'>" + (" | ".join(links)) + "</p>
          ") + + def _generate_csv(csv_path, test_data): + """Generate the CSV log file""" + # If the CSV file hasn't already been generated, this is the + # first run, and we need to add the column names + if not file_exists(csv_path): + Reports.csv_header(csv_path) + # Now add on the fields to a new row + with open(csv_path, "a") as csv: + # Variables that need to be written + vars = [] + vars.append( test_data.image_file ) + vars.append( test_data.image_name ) + vars.append( test_data.main_config.output_dir ) + vars.append( socket.gethostname() ) + vars.append( test_data.autopsy_version ) + vars.append( test_data.heap_space ) + vars.append( test_data.start_date ) + vars.append( test_data.end_date ) + vars.append( test_data.total_test_time ) + vars.append( test_data.total_ingest_time ) + vars.append( test_data.service_times ) + vars.append( str(len(get_exceptions(test_data))) ) + vars.append( str(Reports._get_num_memory_errors("autopsy", test_data)) ) + vars.append( str(Reports._get_num_memory_errors("tika", test_data)) ) + vars.append( str(Reports._get_num_memory_errors("solr", test_data)) ) + vars.append( str(len(search_log_set("autopsy", "TskCoreException", test_data))) ) + vars.append( str(len(search_log_set("autopsy", "TskDataException", test_data))) ) + vars.append( str(test_data.ingest_messages) ) + vars.append( str(test_data.indexed_files) ) + vars.append( str(test_data.indexed_chunks) ) + vars.append( str(len(search_log_set("autopsy", "Stopping ingest due to low disk space on disk", test_data))) ) +# vars.append( str(test_data.db_diff_results.output_objs) ) +# vars.append( str(test_data.db_diff_results.output_artifacts) ) +# vars.append( str(test_data.db_diff_results.output_objs) ) + vars.append( make_local_path("gold", test_data.image_name, DB_FILENAME) ) +# vars.append( test_data.db_diff_results.get_artifact_comparison() ) +# vars.append( test_data.db_diff_results.get_attribute_comparison() ) + vars.append( make_local_path("gold", test_data.image_name, 
"standard.html") ) + vars.append( str(test_data.html_report_passed) ) + vars.append( test_data.ant_to_string() ) + # Join it together with a ", " + output = "|".join(vars) + output += "\n" + # Write to the log! + csv.write(output) + + def csv_header(csv_path): + """Generate the CSV column names.""" + with open(csv_path, "w") as csv: + titles = [] + titles.append("Image Path") + titles.append("Image Name") + titles.append("Output test_config Directory") + titles.append("Host Name") + titles.append("Autopsy Version") + titles.append("Heap Space Setting") + titles.append("Test Start Date") + titles.append("Test End Date") + titles.append("Total Test Time") + titles.append("Total Ingest Time") + titles.append("Service Times") + titles.append("Autopsy Exceptions") + titles.append("Autopsy OutOfMemoryErrors/Exceptions") + titles.append("Tika OutOfMemoryErrors/Exceptions") + titles.append("Solr OutOfMemoryErrors/Exceptions") + titles.append("TskCoreExceptions") + titles.append("TskDataExceptions") + titles.append("Ingest Messages Count") + titles.append("Indexed Files Count") + titles.append("Indexed File Chunks Count") + titles.append("Out Of Disk Space") +# titles.append("Tsk Objects Count") +# titles.append("Artifacts Count") +# titles.append("Attributes Count") + titles.append("Gold Database Name") +# titles.append("Artifacts Comparison") +# titles.append("Attributes Comparison") + titles.append("Gold Report Name") + titles.append("Report Comparison") + titles.append("Ant Command Line") + output = "|".join(titles) + output += "\n" + csv.write(output) + + def _get_num_memory_errors(type, test_data): + """Get the number of OutOfMemory errors and Exceptions. + + Args: + type: a String representing the type of log to check. + test_data: the TestData to examine. 
+ """ + return (len(search_log_set(type, "OutOfMemoryError", test_data)) + + len(search_log_set(type, "OutOfMemoryException", test_data))) + +class Logs(object): + + def generate_log_data(test_data): + """Find and handle relevent data from the Autopsy logs. + + Args: + test_data: the TestData whose logs to examine + """ + Logs._generate_common_log(test_data) + try: + Logs._fill_ingest_data(test_data) + except Exception as e: + Errors.print_error("Error: Unknown fatal error when filling test_config data.") + Errors.print_error(str(e) + "\n") + logging.critical(traceback.format_exc()) + # If running in verbose mode (-v) + if test_data.main_config.args.verbose: + errors = Logs._report_all_errors() + okay = "No warnings or errors in any log files." + print_report(errors, "VERBOSE", okay) + + def _generate_common_log(test_data): + """Generate the common log, the log of all exceptions and warnings from + each log file generated by Autopsy. + + Args: + test_data: the TestData to generate a log for + """ + try: + logs_path = test_data.logs_dir + common_log = codecs.open(test_data.common_log_path, "w", "utf_8") + warning_log = codecs.open(test_data.warning_log, "w", "utf_8") + common_log.write("--------------------------------------------------\n") + common_log.write(test_data.image_name + "\n") + common_log.write("--------------------------------------------------\n") + rep_path = make_local_path(test_data.main_config.output_dir) + rep_path = rep_path.replace("\\\\", "\\") + for file in os.listdir(logs_path): + log = codecs.open(make_path(logs_path, file), "r", "utf_8") + for line in log: + line = line.replace(rep_path, "test_data") + if line.startswith("Exception"): + common_log.write(file +": " + line) + elif line.startswith("Error"): + common_log.write(file +": " + line) + elif line.startswith("SEVERE"): + common_log.write(file +":" + line) + else: + warning_log.write(file +": " + line) + log.close() + common_log.write("\n") + common_log.close() + 
print(test_data.sorted_log) + srtcmdlst = ["sort", test_data.common_log_path, "-o", test_data.sorted_log] + subprocess.call(srtcmdlst) + except (OSError, IOError) as e: + Errors.print_error("Error: Unable to generate the common log.") + Errors.print_error(str(e) + "\n") + Errors.print_error(traceback.format_exc()) + logging.critical(traceback.format_exc()) + + def _fill_ingest_data(test_data): + """Fill the TestDatas variables that require the log files. + + Args: + test_data: the TestData to modify + """ + try: + # Open autopsy.log.0 + log_path = make_path(test_data.logs_dir, "autopsy.log.0") + log = open(log_path) + + # Set the TestData start time based off the first line of autopsy.log.0 + # *** If logging time format ever changes this will break *** + test_data.start_date = log.readline().split(" org.")[0] + + # Set the test_data ending time based off the "create" time (when the file was copied) + test_data.end_date = time.ctime(os.path.getmtime(log_path)) + except IOError as e: + Errors.print_error("Error: Unable to open autopsy.log.0.") + Errors.print_error(str(e) + "\n") + logging.warning(traceback.format_exc()) + # Start date must look like: "Jul 16, 2012 12:57:53 PM" + # End date must look like: "Mon Jul 16 13:02:42 2012" + # *** If logging time format ever changes this will break *** + start = datetime.datetime.strptime(test_data.start_date, "%b %d, %Y %I:%M:%S %p") + end = datetime.datetime.strptime(test_data.end_date, "%a %b %d %H:%M:%S %Y") + test_data.total_test_time = str(end - start) + + try: + # Set Autopsy version, heap space, ingest time, and service times + + version_line = search_logs("INFO: Application name: Autopsy, version:", test_data)[0] + test_data.autopsy_version = get_word_at(version_line, 5).rstrip(",") + + test_data.heap_space = search_logs("Heap memory usage:", test_data)[0].rstrip().split(": ")[1] + + ingest_line = search_logs("Ingest (including enqueue)", test_data)[0] + test_data.total_ingest_time = get_word_at(ingest_line, 
6).rstrip() + + message_line = search_log_set("autopsy", "Ingest messages count:", test_data)[0] + test_data.ingest_messages = int(message_line.rstrip().split(": ")[2]) + + files_line = search_log_set("autopsy", "Indexed files count:", test_data)[0] + test_data.indexed_files = int(files_line.rstrip().split(": ")[2]) + + chunks_line = search_log_set("autopsy", "Indexed file chunks count:", test_data)[0] + test_data.indexed_chunks = int(chunks_line.rstrip().split(": ")[2]) + except (OSError, IOError) as e: + Errors.print_error("Error: Unable to find the required information to fill test_config data.") + Errors.print_error(str(e) + "\n") + logging.critical(traceback.format_exc()) + print(traceback.format_exc()) + try: + service_lines = search_log("autopsy.log.0", "to process()", test_data) + service_list = [] + for line in service_lines: + words = line.split(" ") + # Kind of forcing our way into getting this data + # If this format changes, the tester will break + i = words.index("secs.") + times = words[i-4] + " " + times += words[i-3] + " " + times += words[i-2] + " " + times += words[i-1] + " " + times += words[i] + service_list.append(times) + test_data.service_times = "; ".join(service_list) + except (OSError, IOError) as e: + Errors.print_error("Error: Unknown fatal error when finding service times.") + Errors.print_error(str(e) + "\n") + logging.critical(traceback.format_exc()) + + def _report_all_errors(): + """Generate a list of all the errors found in the common log. + + Returns: + a listof_String, the errors found in the common log + """ + try: + return get_warnings() + get_exceptions() + except (OSError, IOError) as e: + Errors.print_error("Error: Unknown fatal error when reporting all errors.") + Errors.print_error(str(e) + "\n") + logging.warning(traceback.format_exc()) + + def search_common_log(string, test_data): + """Search the common log for any instances of a given string. + + Args: + string: the String to search for. 
+            test_data: the TestData that holds the log to search.
+
+        Returns:
+            a listof_String, all the lines that the string is found on
+        """
+        results = []
+        log = codecs.open(test_data.common_log_path, "r", "utf_8")
+        for line in log:
+            if string in line:
+                results.append(line)
+        log.close()
+        return results
+
+
+def print_report(errors, name, okay):
+    """Print a report with the specified information.
+
+    Args:
+        errors: a listof_String, the errors to report.
+        name: a String, the name of the report.
+        okay: the String to print when there are no errors.
+    """
+    if errors:
+        Errors.print_error("--------< " + name + " >----------")
+        for error in errors:
+            Errors.print_error(str(error))
+        Errors.print_error("--------< / " + name + " >--------\n")
+    else:
+        Errors.print_out("-----------------------------------------------------------------")
+        Errors.print_out("< " + name + " - " + okay + " />")
+        Errors.print_out("-----------------------------------------------------------------\n")
+
+
+def get_exceptions(test_data):
+    """Get a list of the exceptions in the autopsy logs.
+
+    Args:
+        test_data: the TestData to use to find the exceptions.
+    Returns:
+        a listof_String, the exceptions found in the logs.
+    """
+    exceptions = []
+    logs_path = test_data.logs_dir
+    for file in os.listdir(logs_path):
+        if "autopsy.log" in file:
+            log = codecs.open(make_path(logs_path, file), "r", "utf_8")
+            ex = re.compile(r"\SException")
+            er = re.compile(r"\SError")
+            for line in log:
+                if ex.search(line) or er.search(line):
+                    exceptions.append(line)
+            log.close()
+    return exceptions
+
+def get_warnings(test_data):
+    """Get a list of the warnings listed in the common log.
+
+    Args:
+        test_data: the TestData to use to find the warnings
+
+    Returns:
+        listof_String, the warnings found.
+    """
+    warnings = []
+    common_log = codecs.open(test_data.warning_log, "r", "utf_8")
+    for line in common_log:
+        if "warning" in line.lower():
+            warnings.append(line)
+    common_log.close()
+    return warnings
+
+def copy_logs(test_data):
+    """Copy the Autopsy generated logs to output directory.
+
+    Args:
+        test_data: the TestData whose logs will be copied
+    """
+    try:
+        log_dir = os.path.join("..", "..", "Testing","build","test","qa-functional","work","userdir0","var","log")
+        shutil.copytree(log_dir, test_data.logs_dir)
+    except OSError as e:
+        Errors.print_error("Error: Failed to copy the logs.")
+        Errors.print_error(str(e) + "\n")
+        logging.warning(traceback.format_exc())
+
+def setDay():
+    global Day
+    Day = int(strftime("%d", localtime()))
+
+def getLastDay():
+    return Day
+
+def getDay():
+    return int(strftime("%d", localtime()))
+
+def newDay():
+    return getLastDay() != getDay()
+
+#------------------------------------------------------------#
+# Exception classes to manage "acceptable" thrown exceptions #
+# versus unexpected and fatal exceptions                     #
+#------------------------------------------------------------#
+
+class FileNotFoundException(Exception):
+    """
+    If a file cannot be found by one of the helper functions,
+    they will throw a FileNotFoundException unless the purpose
+    is to return False.
+    """
+    def __init__(self, file):
+        self.file = file
+        self.strerror = "FileNotFoundException: " + file
+
+    def print_error(self):
+        Errors.print_error("Error: File could not be found at:")
+        Errors.print_error(self.file + "\n")
+
+    def error(self):
+        error = "Error: File could not be found at:\n" + self.file + "\n"
+        return error
+
+class DirNotFoundException(Exception):
+    """
+    If a directory cannot be found by a helper function,
+    it will throw this exception
+    """
+    def __init__(self, dir):
+        self.dir = dir
+        self.strerror = "DirNotFoundException: " + dir
+
+    def print_error(self):
+        Errors.print_error("Error: Directory could not be found at:")
+        Errors.print_error(self.dir + "\n")
+
+    def error(self):
+        error = "Error: Directory could not be found at:\n" + self.dir + "\n"
+        return error
+
+
+class Errors:
+    """A class used to manage error reporting.
+
+    Attributes:
+        printout: a listof_String, the non-error messages that were printed
+        printerror: a listof_String, the error messages that were printed
+        email_body: a String, the body of the report email
+        email_msg_prefix: a String, the prefix for lines added to the email
+        email_attachs: a listof_pathto_File, the files to be attached to the
+            report email
+    """
+    printout = []
+    printerror = []
+    email_body = ""
+    email_msg_prefix = "Configuration"
+    email_attachs = []
+
+    def set_testing_phase(image_name):
+        """Change the email message prefix to be the given testing phase.
+
+        Args:
+            image_name: a String, representing the current image being tested
+        """
+        Errors.email_msg_prefix = image_name
+
+    def print_out(msg):
+        """Print out an informational message.
+
+        Args:
+            msg: a String, the message to be printed
+        """
+        print(msg)
+        Errors.printout.append(msg)
+
+    def print_error(msg):
+        """Print out an error message.
+
+        Args:
+            msg: a String, the error message to be printed.
+        """
+        print(msg)
+        Errors.printerror.append(msg)
+
+    def clear_print_logs():
+        """Reset the image-specific attributes of the Errors class."""
+        Errors.printout = []
+        Errors.printerror = []
+
+    def add_email_msg(msg):
+        """Add the given message to the body of the report email.
+
+        Args:
+            msg: a String, the message to be added to the email
+        """
+        Errors.email_body += Errors.email_msg_prefix + ":" + msg
+
+    def add_email_attachment(path):
+        """Add the given file to be an attachment for the report email
+
+        Args:
+            path: a pathto_File, the file to attach
+        """
+        Errors.email_attachs.append(path)
+
+
+class DiffResults(object):
+    """Container for the results of the database diff tests.
+
+    Stores artifact, object, and attribute counts and comparisons generated by
+    TskDbDiff.
+
+    Attributes:
+        gold_attrs: a Nat, the number of gold attributes
+        output_attrs: a Nat, the number of output attributes
+        gold_objs: a Nat, the number of gold objects
+        output_objs: a Nat, the number of output objects
+        artifact_comp: a listof_String, describing the differences
+        attribute_comp: a listof_String, describing the differences
+        passed: a boolean, did the diff pass?
+    """
+    def __init__(self, tsk_diff):
+        """Inits a DiffResults
+
+        Args:
+            tsk_diff: a TskDBDiff
+        """
+        self.gold_attrs = tsk_diff.gold_attributes
+        self.output_attrs = tsk_diff.autopsy_attributes
+        self.gold_objs = tsk_diff.gold_objects
+        self.output_objs = tsk_diff.autopsy_objects
+        self.artifact_comp = tsk_diff.artifact_comparison
+        self.attribute_comp = tsk_diff.attribute_comparison
+        self.gold_artifacts = len(tsk_diff.gold_artifacts)
+        self.output_artifacts = len(tsk_diff.autopsy_artifacts)
+        self.passed = tsk_diff.passed
+
+    def get_artifact_comparison(self):
+        if not self.artifact_comp:
+            return "All counts matched"
+        else:
+            return "; ".join(self.artifact_comp)
+
+    def get_attribute_comparison(self):
+        if not self.attribute_comp:
+            return "All counts matched"
+        return ";".join(self.attribute_comp)
+
+
+#-------------------------------------------------------------#
+# Parses argv and stores booleans to match command line input #
+#-------------------------------------------------------------#
+class Args(object):
+    """A container for command line options and arguments.
+
+    Attributes:
+        single: a boolean indicating whether to run in single file mode
+        single_file: an Image to run the test on
+        rebuild: a boolean indicating whether to run in rebuild mode
+        list: a boolean indicating a config file was specified
+        unallocated: a boolean indicating unallocated space should be ignored
+        ignore: a boolean indicating the input directory should be ignored
+        keep: a boolean indicating whether to keep the SOLR index
+        verbose: a boolean indicating whether verbose output should be printed
+        exception: a boolean indicating whether errors containing the
+            exception_string should be printed
+        exception_string: a String representing an exception name
+        fr: a boolean indicating whether gold standard images will be downloaded
+    """
+    def __init__(self):
+        self.single = False
+        self.single_file = ""
+        self.rebuild = False
+        self.list = False
+        self.config_file = ""
+        self.unallocated = False
+        self.ignore = False
+        self.keep = False
+        self.verbose = False
+        self.exception = False
+        self.exception_string = ""
+        self.fr = False
+        self.email_enabled = False
+
+    def parse(self):
+        """Get the command line arguments and parse them."""
+        nxtproc = []
+        nxtproc.append("python3")
+        nxtproc.append(sys.argv.pop(0))
+        while sys.argv:
+            arg = sys.argv.pop(0)
+            nxtproc.append(arg)
+            if(arg == "-f"):
+                #try: @@@ Commented out until a more specific except statement is added
+                arg = sys.argv.pop(0)
+                print("Running on a single file:")
+                print(path_fix(arg) + "\n")
+                self.single = True
+                self.single_file = path_fix(arg)
+                #except:
+                #    print("Error: No single file given.\n")
+                #    return False
+            elif(arg == "-r" or arg == "--rebuild"):
+                print("Running in rebuild mode.\n")
+                self.rebuild = True
+            elif(arg == "-l" or arg == "--list"):
+                try:
+                    arg = sys.argv.pop(0)
+                    nxtproc.append(arg)
+                    print("Running from configuration file:")
+                    print(arg + "\n")
+                    self.list = True
+                    self.config_file = arg
+                except:
+                    print("Error: No configuration file given.\n")
+                    return False
+            elif(arg == "-u" or arg == "--unallocated"):
+                print("Ignoring unallocated space.\n")
+                self.unallocated = True
+            elif(arg == "-k" or arg == "--keep"):
+                print("Keeping the Solr index.\n")
+                self.keep = True
+            elif(arg == "-v" or arg == "--verbose"):
+                print("Running in verbose mode:")
+                print("Printing all thrown exceptions.\n")
+                self.verbose = True
+            elif(arg == "-e" or arg == "--exception"):
+                try:
+                    arg = sys.argv.pop(0)
+                    nxtproc.append(arg)
+                    print("Running in exception mode: ")
+                    print("Printing all exceptions with the string '" + arg + "'\n")
+                    self.exception = True
+                    self.exception_string = arg
+                except:
+                    print("Error: No exception string given.")
+            elif arg == "-h" or arg == "--help":
+                print(usage())
+                return False
+            elif arg == "-fr" or arg == "--forcerun":
+                print("Not downloading new images")
+                self.fr = True
+            elif arg == "-email":
+                # Note: "-e" is already taken by --exception above, so only
+                # this form can enable e-mail reporting.
+                self.email_enabled = True
+            else:
+                print(usage())
+                return False
+        # Return whether the args were successfully parsed
+        return self._sanity_check()
+
+    def _sanity_check(self):
+        """Check to make sure there are no conflicting arguments and the
+        specified files exist.
+
+        Returns:
+            False if there are conflicting arguments or a specified file does
+            not exist, True otherwise
+        """
+        if self.single and self.list:
+            print("Cannot run both from config file and on a single file.")
+            return False
+        if self.list:
+            if not file_exists(self.config_file):
+                print("Configuration file does not exist at:",
+                      self.config_file)
+                return False
+        elif self.single:
+            if not file_exists(self.single_file):
+                print("Image file does not exist at: " + self.single_file)
+                return False
+        if (not self.single) and (not self.ignore) and (not self.list):
+            self.config_file = "config.xml"
+            if not file_exists(self.config_file):
+                print("Configuration file does not exist at: " + self.config_file)
+                return False
+
+        return True
+
+####
+# Helper Functions
+####
+def search_logs(string, test_data):
+    """Search through all the known log files for a given string.
+
+    Args:
+        string: the String to search for.
+        test_data: the TestData that holds the logs to search.
+
+    Returns:
+        a listof_String, the lines that contained the given String.
+    """
+    logs_path = test_data.logs_dir
+    results = []
+    for file in os.listdir(logs_path):
+        log = codecs.open(make_path(logs_path, file), "r", "utf_8")
+        for line in log:
+            if string in line:
+                results.append(line)
+        log.close()
+    return results
+
+def search_log(log, string, test_data):
+    """Search the given log for any instances of a given string.
+
+    Args:
+        log: a pathto_File, the log to search in
+        string: the String to search for.
+        test_data: the TestData that holds the log to search.
+
+    Returns:
+        a listof_String, all the lines that the string is found on
+    """
+    logs_path = make_path(test_data.logs_dir, log)
+    try:
+        results = []
+        log = codecs.open(logs_path, "r", "utf_8")
+        for line in log:
+            if string in line:
+                results.append(line)
+        log.close()
+        if results:
+            return results
+    except:
+        raise FileNotFoundException(logs_path)
+
+# Search through all the logs of the given type
+# Types include autopsy, tika, and solr
+def search_log_set(type, string, test_data):
+    """Search through all logs of the given type for the given string.
+
+    Args:
+        type: the type of log to search in.
+        string: the String to search for.
+        test_data: the TestData containing the logs to search.
+
+    Returns:
+        a listof_String, the lines on which the String was found.
+    """
+    logs_path = test_data.logs_dir
+    results = []
+    for file in os.listdir(logs_path):
+        if type in file:
+            log = codecs.open(make_path(logs_path, file), "r", "utf_8")
+            for line in log:
+                if string in line:
+                    results.append(line)
+            log.close()
+    return results
+
+
+def clear_dir(dir):
+    """Clears all files from a directory and remakes it.
+
+    Args:
+        dir: a pathto_Dir, the directory to clear
+    """
+    try:
+        if dir_exists(dir):
+            shutil.rmtree(dir)
+        os.makedirs(dir)
+        return True
+    except OSError as e:
+        Errors.print_error("Error: Cannot clear the given directory:")
+        Errors.print_error(dir + "\n")
+        print(str(e))
+        return False
+
+def del_dir(dir):
+    """Delete the given directory.
+
+    Args:
+        dir: a pathto_Dir, the directory to delete
+    """
+    try:
+        if dir_exists(dir):
+            shutil.rmtree(dir)
+        return True
+    except:
+        Errors.print_error("Error: Cannot delete the given directory:")
+        Errors.print_error(dir + "\n")
+        return False
+
+def get_file_in_dir(dir, ext):
+    """Returns the first file in the given directory with the given extension.
+ + Args: + dir: a pathto_Dir, the directory to search + ext: a String, the extension to search for + + Returns: + pathto_File, the file that was found + """ + try: + for file in os.listdir(dir): + if file.endswith(ext): + return make_path(dir, file) + # If nothing has been found, raise an exception + raise FileNotFoundException(dir) + except: + raise DirNotFoundException(dir) + +def find_file_in_dir(dir, name, ext): + """Find the file with the given name in the given directory. + + Args: + dir: a pathto_Dir, the directory to search + name: a String, the basename of the file to search for + ext: a String, the extension of the file to search for + """ + try: + for file in os.listdir(dir): + if file.startswith(name): + if file.endswith(ext): + return make_path(dir, file) + raise FileNotFoundException(dir) + except: + raise DirNotFoundException(dir) + + +class OS: + LINUX, MAC, WIN, CYGWIN = range(4) + + +if __name__ == "__main__": + global SYS + if _platform == "linux" or _platform == "linux2": + SYS = OS.LINUX + elif _platform == "darwin": + SYS = OS.MAC + elif _platform == "win32": + SYS = OS.WIN + elif _platform == "cygwin": + SYS = OS.CYGWIN + + if SYS is OS.WIN or SYS is OS.CYGWIN: + main() + else: + print("We only support Windows and Cygwin at this time.") diff --git a/test/script/srcupdater.py b/test/script/srcupdater.py index 99a393d9eb..c8c7d5410b 100644 --- a/test/script/srcupdater.py +++ b/test/script/srcupdater.py @@ -1,187 +1,187 @@ -import codecs -import datetime -import logging -import os -import re -import shutil -import socket -import sqlite3 -import subprocess -import sys -from sys import platform as _platform -import time -import traceback -import xml -from xml.dom.minidom import parse, parseString -import Emailer -from regression_utils import * - -def compile(errore, attachli, parsedin): - global redo - global tryredo - global failedbool - global errorem - errorem = errore - global attachl - attachl = attachli - global passed - global parsed - 
parsed = parsedin - passed = True - tryredo = False - redo = True - while(redo): - passed = True - if(passed): - gitPull("sleuthkit") - if(passed): - vsBuild() - if(passed): - gitPull("autopsy") - if(passed): - antBuild("datamodel", False) - if(passed): - antBuild("autopsy", True) - if(passed): - redo = False - else: - print("Compile Failed") - time.sleep(3600) - attachl = [] - errorem = "The test standard didn't match the gold standard.\n" - failedbool = False - if(tryredo): - errorem = "" - errorem += "Rebuilt properly.\n" - Emailer.send_email(parsed, errorem, attachl, True) - attachl = [] - passed = True - -#Pulls from git -def gitPull(TskOrAutopsy): - global SYS - global errorem - global attachl - ccwd = "" - gppth = make_local_path("..", "GitPullOutput" + TskOrAutopsy + ".txt") - attachl.append(gppth) - gpout = open(gppth, 'a') - toPull = "https://www.github.com/sleuthkit/" + TskOrAutopsy - call = ["git", "pull", toPull] - if TskOrAutopsy == "sleuthkit": - ccwd = os.path.join("..", "..", "..", "sleuthkit") - else: - ccwd = os.path.join("..", "..") - subprocess.call(call, stdout=sys.stdout, cwd=ccwd) - gpout.close() - - -#Builds TSK as a win32 applicatiion -def vsBuild(): - global redo - global tryredo - global passed - global parsed - #Please ensure that the current working directory is $autopsy/testing/script - oldpath = os.getcwd() - os.chdir(os.path.join("..", "..", "..","sleuthkit", "win32")) - vs = [] - vs.append("/cygdrive/c/windows/microsoft.NET/framework/v4.0.30319/MSBuild.exe") - vs.append(os.path.join("Tsk-win.sln")) - vs.append("/p:configuration=release") - vs.append("/p:platform=win32") - vs.append("/t:clean") - vs.append("/t:rebuild") - print(vs) - VSpth = make_local_path("..", "VSOutput.txt") - VSout = open(VSpth, 'a') - subprocess.call(vs, stdout=VSout) - VSout.close() - os.chdir(oldpath) - chk = os.path.join("..", "..", "..","sleuthkit", "win32", "Release", "libtsk_jni.dll") - try: - open(chk) - except IOError as e: - global errorem - global 
attachl - if(not tryredo): - errorem += "LIBTSK C++ failed to build.\n" - attachl.append(VSpth) - send_email(parsed, errorem, attachl, False) - tryredo = True - passed = False - redo = True - - - -#Builds Autopsy or the Datamodel -def antBuild(which, Build): - global redo - global passed - global tryredo - global parsed - directory = os.path.join("..", "..") - ant = [] - if which == "datamodel": - directory = os.path.join("..", "..", "..", "sleuthkit", "bindings", "java") - ant.append("ant") - ant.append("-f") - ant.append(directory) - ant.append("clean") - if(Build): - ant.append("build") - else: - ant.append("dist") - antpth = make_local_path("..", "ant" + which + "Output.txt") - antout = open(antpth, 'a') - succd = subprocess.call(ant, stdout=antout) - antout.close() - global errorem - global attachl - if which == "datamodel": - chk = os.path.join("..", "..", "..","sleuthkit", "bindings", "java", "dist", "TSK_DataModel.jar") - try: - open(chk) - except IOError as e: - if(not tryredo): - errorem += "DataModel Java build failed.\n" - attachl.append(antpth) - Emailer.send_email(parsed, errorem, attachl, False) - passed = False - tryredo = True - elif (succd != 0 and (not tryredo)): - errorem += "Autopsy build failed.\n" - attachl.append(antpth) - Emailer.send_email(parsed, errorem, attachl, False) - tryredo = True - elif (succd != 0): - passed = False - - -def main(): - errore = "" - attachli = [] - config_file = "" - arg = sys.argv.pop(0) - arg = sys.argv.pop(0) - config_file = arg - parsedin = parse(config_file) - compile(errore, attachli, parsedin) - -class OS: - LINUX, MAC, WIN, CYGWIN = range(4) -if __name__ == "__main__": - global SYS - if _platform == "linux" or _platform == "linux2": - SYS = OS.LINUX - elif _platform == "darwin": - SYS = OS.MAC - elif _platform == "win32": - SYS = OS.WIN - elif _platform == "cygwin": - SYS = OS.CYGWIN - - if SYS is OS.WIN or SYS is OS.CYGWIN: - main() - else: - print("We only support Windows and Cygwin at this time.") 
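The `compile()` routine in srcupdater.py runs each pull/build step in order and, if any step fails, sleeps for an hour and restarts the whole cycle. A minimal sketch of that control flow, in the script's own Python (the names `run_build_cycle` and `flaky_step` are hypothetical illustrations, not part of the module):

```python
import time

# Hypothetical sketch of srcupdater's compile() retry loop: run the build
# steps in order; if any step fails, wait and restart the whole cycle.
def run_build_cycle(steps, max_attempts=3, wait=0):
    """steps: zero-argument callables returning True on success."""
    for attempt in range(1, max_attempts + 1):
        # all() short-circuits, mirroring compile()'s chained if(passed) guards
        if all(step() for step in steps):
            return attempt
        time.sleep(wait)  # srcupdater sleeps 3600 seconds here
    return None  # every attempt failed

def flaky_step(fail_first):
    """Build a step that optionally fails on its first call, then succeeds."""
    calls = [0]
    def step():
        calls[0] += 1
        return not (fail_first and calls[0] == 1)
    return step

# gitPull succeeds, vsBuild fails once, antBuild succeeds:
steps = [flaky_step(False), flaky_step(True), flaky_step(False)]
print(run_build_cycle(steps))  # → 2 (the second cycle succeeds)
```

Unlike this sketch, the real loop retries indefinitely and sends a "Rebuilt properly" email once a previously failed cycle passes.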
+import codecs
+import datetime
+import logging
+import os
+import re
+import shutil
+import socket
+import sqlite3
+import subprocess
+import sys
+from sys import platform as _platform
+import time
+import traceback
+import xml
+from xml.dom.minidom import parse, parseString
+import Emailer
+from regression_utils import *
+
+def compile(errore, attachli, parsedin):
+    global redo
+    global tryredo
+    global failedbool
+    global errorem
+    errorem = errore
+    global attachl
+    attachl = attachli
+    global passed
+    global parsed
+    parsed = parsedin
+    passed = True
+    tryredo = False
+    redo = True
+    while(redo):
+        passed = True
+        if(passed):
+            gitPull("sleuthkit")
+        if(passed):
+            vsBuild()
+        if(passed):
+            gitPull("autopsy")
+        if(passed):
+            antBuild("datamodel", False)
+        if(passed):
+            antBuild("autopsy", True)
+        if(passed):
+            redo = False
+        else:
+            print("Compile Failed")
+            time.sleep(3600)
+            attachl = []
+            errorem = "The test standard didn't match the gold standard.\n"
+            failedbool = False
+    if(tryredo):
+        errorem = ""
+        errorem += "Rebuilt properly.\n"
+        Emailer.send_email(parsed, errorem, attachl, True)
+        attachl = []
+        passed = True
+
+#Pulls from git
+def gitPull(TskOrAutopsy):
+    global SYS
+    global errorem
+    global attachl
+    ccwd = ""
+    gppth = make_local_path("..", "GitPullOutput" + TskOrAutopsy + ".txt")
+    attachl.append(gppth)
+    gpout = open(gppth, 'a')
+    toPull = "https://www.github.com/sleuthkit/" + TskOrAutopsy
+    call = ["git", "pull", toPull]
+    if TskOrAutopsy == "sleuthkit":
+        ccwd = os.path.join("..", "..", "..", "sleuthkit")
+    else:
+        ccwd = os.path.join("..", "..")
+    subprocess.call(call, stdout=sys.stdout, cwd=ccwd)
+    gpout.close()
+
+
+#Builds TSK as a win32 application
+def vsBuild():
+    global redo
+    global tryredo
+    global passed
+    global parsed
+    #Please ensure that the current working directory is $autopsy/testing/script
+    oldpath = os.getcwd()
+    os.chdir(os.path.join("..", "..", "..","sleuthkit", "win32"))
+    vs = []
+    vs.append("/cygdrive/c/windows/microsoft.NET/framework/v4.0.30319/MSBuild.exe")
+    vs.append(os.path.join("Tsk-win.sln"))
+    vs.append("/p:configuration=release")
+    vs.append("/p:platform=win32")
+    vs.append("/t:clean")
+    vs.append("/t:rebuild")
+    print(vs)
+    VSpth = make_local_path("..", "VSOutput.txt")
+    VSout = open(VSpth, 'a')
+    subprocess.call(vs, stdout=VSout)
+    VSout.close()
+    os.chdir(oldpath)
+    chk = os.path.join("..", "..", "..","sleuthkit", "win32", "Release", "libtsk_jni.dll")
+    try:
+        open(chk)
+    except IOError as e:
+        global errorem
+        global attachl
+        if(not tryredo):
+            errorem += "LIBTSK C++ failed to build.\n"
+            attachl.append(VSpth)
+            Emailer.send_email(parsed, errorem, attachl, False)
+        tryredo = True
+        passed = False
+        redo = True
+
+
+
+#Builds Autopsy or the Datamodel
+def antBuild(which, Build):
+    global redo
+    global passed
+    global tryredo
+    global parsed
+    directory = os.path.join("..", "..")
+    ant = []
+    if which == "datamodel":
+        directory = os.path.join("..", "..", "..", "sleuthkit", "bindings", "java")
+    ant.append("ant")
+    ant.append("-f")
+    ant.append(directory)
+    ant.append("clean")
+    if(Build):
+        ant.append("build")
+    else:
+        ant.append("dist")
+    antpth = make_local_path("..", "ant" + which + "Output.txt")
+    antout = open(antpth, 'a')
+    succd = subprocess.call(ant, stdout=antout)
+    antout.close()
+    global errorem
+    global attachl
+    if which == "datamodel":
+        chk = os.path.join("..", "..", "..","sleuthkit", "bindings", "java", "dist", "TSK_DataModel.jar")
+        try:
+            open(chk)
+        except IOError as e:
+            if(not tryredo):
+                errorem += "DataModel Java build failed.\n"
+                attachl.append(antpth)
+                Emailer.send_email(parsed, errorem, attachl, False)
+            passed = False
+            tryredo = True
+    elif (succd != 0 and (not tryredo)):
+        errorem += "Autopsy build failed.\n"
+        attachl.append(antpth)
+        Emailer.send_email(parsed, errorem, attachl, False)
+        tryredo = True
+    elif (succd != 0):
+        passed = False
+
+
+def main():
+    errore = ""
+    attachli = []
+    config_file = ""
+ arg = sys.argv.pop(0) + arg = sys.argv.pop(0) + config_file = arg + parsedin = parse(config_file) + compile(errore, attachli, parsedin) + +class OS: + LINUX, MAC, WIN, CYGWIN = range(4) +if __name__ == "__main__": + global SYS + if _platform == "linux" or _platform == "linux2": + SYS = OS.LINUX + elif _platform == "darwin": + SYS = OS.MAC + elif _platform == "win32": + SYS = OS.WIN + elif _platform == "cygwin": + SYS = OS.CYGWIN + + if SYS is OS.WIN or SYS is OS.CYGWIN: + main() + else: + print("We only support Windows and Cygwin at this time.") diff --git a/thunderbirdparser/manifest.mf b/thunderbirdparser/manifest.mf index c16a2f4c01..fc34c0e90a 100644 --- a/thunderbirdparser/manifest.mf +++ b/thunderbirdparser/manifest.mf @@ -1,7 +1,7 @@ -Manifest-Version: 1.0 -AutoUpdate-Show-In-Client: true -OpenIDE-Module: org.sleuthkit.autopsy.thunderbirdparser/3 -OpenIDE-Module-Implementation-Version: 9 -OpenIDE-Module-Layer: org/sleuthkit/autopsy/thunderbirdparser/layer.xml -OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/thunderbirdparser/Bundle.properties - +Manifest-Version: 1.0 +AutoUpdate-Show-In-Client: true +OpenIDE-Module: org.sleuthkit.autopsy.thunderbirdparser/3 +OpenIDE-Module-Implementation-Version: 9 +OpenIDE-Module-Layer: org/sleuthkit/autopsy/thunderbirdparser/layer.xml +OpenIDE-Module-Localizing-Bundle: org/sleuthkit/autopsy/thunderbirdparser/Bundle.properties + diff --git a/thunderbirdparser/nbproject/project.properties b/thunderbirdparser/nbproject/project.properties index 6a243df466..0735c621fa 100644 --- a/thunderbirdparser/nbproject/project.properties +++ b/thunderbirdparser/nbproject/project.properties @@ -1,6 +1,6 @@ -javac.source=1.7 -javac.compilerargs=-Xlint -Xlint:-serial -license.file=../LICENSE-2.0.txt -nbm.homepage=http://www.sleuthkit.org/autopsy/ -nbm.needs.restart=true -spec.version.base=1.2 +javac.source=1.7 +javac.compilerargs=-Xlint -Xlint:-serial +license.file=../LICENSE-2.0.txt +nbm.homepage=http://www.sleuthkit.org/autopsy/ 
+nbm.needs.restart=true
+spec.version.base=1.2
diff --git a/thunderbirdparser/nbproject/project.xml b/thunderbirdparser/nbproject/project.xml
index aec76cd632..52a74cd1a7 100644
--- a/thunderbirdparser/nbproject/project.xml
+++ b/thunderbirdparser/nbproject/project.xml
@@ -1,31 +1,31 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<project xmlns="http://www.netbeans.org/ns/project/1">
-    <type>org.netbeans.modules.apisupport.project</type>
-    <configuration>
-        <data xmlns="http://www.netbeans.org/ns/nb-module-project/3">
-            <code-name-base>org.sleuthkit.autopsy.thunderbirdparser</code-name-base>
-            <suite-component/>
-            <module-dependencies>
-                <dependency>
-                    <code-name-base>org.sleuthkit.autopsy.core</code-name-base>
-                    <build-prerequisite/>
-                    <compile-dependency/>
-                    <run-dependency>
-                        <release-version>9</release-version>
-                        <specification-version>7.0</specification-version>
-                    </run-dependency>
-                </dependency>
-                <dependency>
-                    <code-name-base>org.sleuthkit.autopsy.keywordsearch</code-name-base>
-                    <build-prerequisite/>
-                    <compile-dependency/>
-                    <run-dependency>
-                        <release-version>5</release-version>
-                        <specification-version>3.2</specification-version>
-                    </run-dependency>
-                </dependency>
-            </module-dependencies>
-            <public-packages/>
-        </data>
-    </configuration>
-</project>
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://www.netbeans.org/ns/project/1">
+    <type>org.netbeans.modules.apisupport.project</type>
+    <configuration>
+        <data xmlns="http://www.netbeans.org/ns/nb-module-project/3">
+            <code-name-base>org.sleuthkit.autopsy.thunderbirdparser</code-name-base>
+            <suite-component/>
+            <module-dependencies>
+                <dependency>
+                    <code-name-base>org.sleuthkit.autopsy.core</code-name-base>
+                    <build-prerequisite/>
+                    <compile-dependency/>
+                    <run-dependency>
+                        <release-version>9</release-version>
+                        <specification-version>7.0</specification-version>
+                    </run-dependency>
+                </dependency>
+                <dependency>
+                    <code-name-base>org.sleuthkit.autopsy.keywordsearch</code-name-base>
+                    <build-prerequisite/>
+                    <compile-dependency/>
+                    <run-dependency>
+                        <release-version>5</release-version>
+                        <specification-version>3.2</specification-version>
+                    </run-dependency>
+                </dependency>
+            </module-dependencies>
+            <public-packages/>
+        </data>
+    </configuration>
+</project>
diff --git a/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdEmailParser.java b/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdEmailParser.java
index 11d2ca91b7..e130a5ba20 100644
--- a/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdEmailParser.java
+++ b/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdEmailParser.java
@@ -1,3 +1,21 @@
+/*
+ * Autopsy Forensic Browser
+ *
+ * Copyright 2011-2013 Basis Technology Corp.
+ * Contact: carrier <at> sleuthkit <dot> org
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.sleuthkit.autopsy.thunderbirdparser;
 
 import java.io.*;
@@ -18,6 +36,10 @@ import org.apache.tika.sax.BodyContentHandler;
 import org.xml.sax.ContentHandler;
 import org.xml.sax.SAXException;
 
+/**
+ * Parses an MBOX file.
+ */
 public class ThunderbirdEmailParser {
 
     private InputStream stream;
@@ -41,6 +63,10 @@ public class ThunderbirdEmailParser {
         this.tika = new Tika();
     }
 
+    /**
+     *
+     * @param inStream InputStream of the MBOX file to parse
+     */
     public ThunderbirdEmailParser(InputStream inStream) {
         this.tika = new Tika();
         this.stream = inStream;
@@ -61,11 +87,26 @@ public class ThunderbirdEmailParser {
         this.contentHandler = new BodyContentHandler(10*1024*1024);
     }
 
+    /**
+     * Parse data passed in via constructor
+     * @throws FileNotFoundException
+     * @throws IOException
+     * @throws SAXException
+     * @throws TikaException
+     */
     public void parse() throws FileNotFoundException, IOException, SAXException, TikaException {
         init();
         parser.parse(this.stream, this.contentHandler, this.metadata, context);
     }
 
+    /**
+     * Parse the given MBOX stream
+     * @param inStream stream of the MBOX file
+     * @throws FileNotFoundException
+     * @throws IOException
+     * @throws SAXException
+     * @throws TikaException
+     */
     public void parse(InputStream inStream) throws FileNotFoundException, IOException, SAXException, TikaException {
         init();
         parser.parseMbox(inStream, this.contentHandler, this.metadata, context);
diff --git a/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxFileIngestModule.java b/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxFileIngestModule.java
index 926ea686a0..317eaece43 100644
--- a/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxFileIngestModule.java
+++ b/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxFileIngestModule.java
@@ -18,11 +18,8 @@
  */
 package org.sleuthkit.autopsy.thunderbirdparser;
 
-import java.io.File;
 import java.io.FileNotFoundException;
 import java.io.IOException;
-import java.sql.ResultSet;
-import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Collection;
 import java.util.HashMap;
@@ -32,9 +29,8 @@ import java.util.logging.Level;
 import org.apache.commons.lang.StringEscapeUtils;
 import org.apache.tika.exception.TikaException;
 import org.apache.tika.metadata.Metadata;
-import org.sleuthkit.autopsy.casemodule.Case;
 import org.sleuthkit.autopsy.coreutils.Logger;
-import org.sleuthkit.autopsy.datamodel.ContentUtils;
+import org.sleuthkit.autopsy.coreutils.Version;
 import org.sleuthkit.autopsy.ingest.IngestModuleAbstractFile;
 import org.sleuthkit.autopsy.ingest.IngestModuleInit;
 import org.sleuthkit.autopsy.ingest.IngestServices;
@@ -44,9 +40,7 @@ import org.sleuthkit.datamodel.AbstractFile;
 import org.sleuthkit.datamodel.BlackboardArtifact;
 import org.sleuthkit.datamodel.BlackboardAttribute;
 import org.sleuthkit.datamodel.BlackboardAttribute.ATTRIBUTE_TYPE;
-import org.sleuthkit.datamodel.Content;
 import org.sleuthkit.datamodel.ReadContentInputStream;
-import org.sleuthkit.datamodel.SleuthkitCase;
 import org.sleuthkit.datamodel.TskCoreException;
 import org.sleuthkit.datamodel.TskData;
 import org.sleuthkit.datamodel.TskException;
@@ -61,11 +55,9 @@ public class ThunderbirdMboxFileIngestModule extends IngestModuleAbstractFile {
 
     private static final Logger logger = Logger.getLogger(ThunderbirdMboxFileIngestModule.class.getName());
     private static ThunderbirdMboxFileIngestModule instance = null;
     private IngestServices services;
-    private static int messageId = 0;
-    private Case currentCase;
-    private static final String MODULE_NAME = "Thunderbird Parser";
+    private static final String MODULE_NAME = "MBox Parser";
     private final String hashDBModuleName = "Hash Lookup";
-    final public static String MODULE_VERSION = "1.0";
+    final public static String MODULE_VERSION = Version.getVersion();
 
     public static synchronized ThunderbirdMboxFileIngestModule getDefault() {
         if (instance == null) {
@@
-76,21 +68,31 @@ public class ThunderbirdMboxFileIngestModule extends IngestModuleAbstractFile {
 
     @Override
     public ProcessResult process(PipelineContext<IngestModuleAbstractFile> ingestContext, AbstractFile abstractFile) {
-        if (abstractFile.getKnown().equals(
-                TskData.FileKnown.KNOWN)) {
-            return ProcessResult.OK; //file is known, stop processing it
-        }
+
+        // skip known
+        if (abstractFile.getKnown().equals(TskData.FileKnown.KNOWN)) {
+            return ProcessResult.OK;
+        }
+
+        //skip unalloc
+        if(abstractFile.getType().equals(TskData.TSK_DB_FILES_TYPE_ENUM.UNALLOC_BLOCKS)) {
+            return ProcessResult.OK;
+        }
 
+        //file has read error, stop processing it
+        // @@@ I don't really like this
+        // we don't know if Hash was run or if it had lookup errors
         IngestModuleAbstractFile.ProcessResult hashDBResult = services.getAbstractFileModuleResult(hashDBModuleName);
         if (hashDBResult == IngestModuleAbstractFile.ProcessResult.ERROR) {
-            return ProcessResult.ERROR; //file has read error, stop processing it
+            return ProcessResult.ERROR;
         }
 
         if (abstractFile.isVirtual()) {
             return ProcessResult.OK;
         }
 
+        // check its signature
         boolean isMbox = false;
         try {
             byte[] t = new byte[64];
@@ -110,102 +112,83 @@ public class ThunderbirdMboxFileIngestModule extends IngestModuleAbstractFile {
 
         logger.log(Level.INFO, "ThunderbirdMboxFileIngestModule: Parsing {0}", abstractFile.getName());
 
-        String mboxName = abstractFile.getName();
-        String msfName = mboxName + ".msf";
-        //Long mboxId = fsContent.getId();
-        String mboxPath = abstractFile.getParentPath();
-        Long msfId = 0L;
-        currentCase = Case.getCurrentCase(); // get the most updated case
-        SleuthkitCase tskCase = currentCase.getSleuthkitCase();
+        String mboxFileName = abstractFile.getName();
+        String mboxParentDir = abstractFile.getParentPath();
 
-        try {
-            ResultSet resultset = tskCase.runQuery("SELECT obj_id FROM tsk_files WHERE parent_path = '" + mboxPath + "' and name = '" + msfName + "'");
-            if (!resultset.next()) {
-                logger.log(Level.WARNING, "Could not find msf file in mbox dir: 
" + mboxPath + " file: " + msfName); - tskCase.closeRunQuery(resultset); - return ProcessResult.OK; - } else { - msfId = resultset.getLong(1); - tskCase.closeRunQuery(resultset); - } - - } catch (SQLException ex) { - logger.log(Level.WARNING, "Could not find msf file in mbox dir: " + mboxPath + " file: " + msfName); - } - - try { - Content msfContent = tskCase.getContentById(msfId); - if (msfContent != null) { - ContentUtils.writeToFile(msfContent, new File(currentCase.getTempDirectory() + File.separator + msfName)); - } - } catch (IOException ex) { - logger.log(Level.WARNING, "Unable to obtain msf file for mbox parsing:" + msfName, ex); - } catch (TskCoreException ex) { - logger.log(Level.WARNING, "Unable to obtain msf file for mbox parsing:" + msfName, ex); - } - int index = 0; - String replace = ""; - boolean a = mboxPath.indexOf("/ImapMail/") > 0; - boolean b = mboxPath.indexOf("/Mail/") > 0; - if (b == true) { - index = mboxPath.indexOf("/Mail/"); - replace = "/Mail"; - } else if (a == true) { - index = mboxPath.indexOf("/ImapMail/"); - replace = "/ImapMail"; - } else { - replace = ""; - - } - - String folderPath = mboxPath.substring(index); - folderPath = folderPath.replaceAll(replace, ""); - folderPath = folderPath + mboxName; - folderPath = folderPath.replaceAll(".sbd", ""); -// Reader reader = null; -// try { -// reader = new FileReader(currentCase.getTempDirectory() + File.separator + msfName); -// } catch (FileNotFoundException ex) { -// Logger.getLogger(ThunderbirdMboxFileIngestModule.class.getName()).log(Level.WARNING, null, ex); + // Find the .msf file in the same folder + // BC: Commented out because results are not being used Oct '13 + //Long msfId = 0L; + //String msfName = mboxFileName + ".msf"; + //SleuthkitCase tskCase = currentCase.getSleuthkitCase(); + // @@@ We shouldn't bail out here if we don't find it... 
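For reference, the new folder-derivation rule introduced later in this hunk (keep everything after "/Mail/" or "/ImapMail/", append the mbox file name, strip the ".sbd" folder suffixes) can be sketched language-neutrally in Python; `email_folder_from_path` is a hypothetical helper name, not part of the module:

```python
def email_folder_from_path(parent_dir, mbox_name):
    """Sketch of the new emailFolder derivation: keep everything after
    "/Mail/" or "/ImapMail/", append the mbox file name, and strip the
    ".sbd" folder suffixes. (Note the Java code uses replaceAll, which
    treats ".sbd" as a regex; a literal replace is used here.)"""
    folder = ""
    if "/Mail/" in parent_dir:
        # +5 keeps the trailing "/" of the "/Mail/" marker
        folder = parent_dir[parent_dir.index("/Mail/") + 5:]
    elif "/ImapMail/" in parent_dir:
        # +9 keeps the trailing "/" of the "/ImapMail/" marker
        folder = parent_dir[parent_dir.index("/ImapMail/") + 9:]
    folder += mbox_name
    return folder.replace(".sbd", "")
```

This mirrors the substring arithmetic in the Java hunk: the offsets 5 and 9 are one short of the marker lengths, so the derived folder keeps a leading slash.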
+// try { +// // @@@ Replace this with a call to FileManager.findFiles() +// ResultSet resultset = tskCase.runQuery("SELECT obj_id FROM tsk_files WHERE parent_path = '" + mboxParentDir + "' and name = '" + msfName + "'"); +// if (!resultset.next()) { +// logger.log(Level.WARNING, "Could not find msf file in mbox dir: " + mboxParentDir + " file: " + msfName); +// tskCase.closeRunQuery(resultset); +// return ProcessResult.OK; +// } else { +// msfId = resultset.getLong(1); +// tskCase.closeRunQuery(resultset); // } -// MorkDocument morkDocument = new MorkDocument(reader); -// List<Dict> dicts = morkDocument.getDicts(); -// for(Dict dict : dicts){ -// String path = dict.getValue("81").toString(); -// String account = dict.getValue("8D").toString(); -// } - String emailId = ""; - String content = ""; - String from = ""; - String to = ""; - String stringDate = ""; - Long date = 0L; - String subject = ""; - String cc = ""; - String bcc = ""; - ThunderbirdEmailParser mbox = new ThunderbirdEmailParser(); +// +// } catch (SQLException ex) { +// logger.log(Level.WARNING, "Could not find msf file in mbox dir: " + mboxParentDir + " file: " + msfName); +// } +// +// try { +// Content msfContent = tskCase.getContentById(msfId); +// if (msfContent != null) { +// ContentUtils.writeToFile(msfContent, new File(currentCase.getTempDirectory() + File.separator + msfName)); +// } +// } catch (IOException ex) { +// logger.log(Level.WARNING, "Unable to obtain msf file for mbox parsing:" + msfName, ex); +// } catch (TskCoreException ex) { +// logger.log(Level.WARNING, "Unable to obtain msf file for mbox parsing:" + msfName, ex); +// } + + + // use the local path to determine the e-mail folder structure + String emailFolder = ""; + // email folder is everything after "Mail" or ImapMail + if (mboxParentDir.contains("/Mail/")) { + emailFolder = mboxParentDir.substring(mboxParentDir.indexOf("/Mail/") + 5); + } + else if (mboxParentDir.contains("/ImapMail/")) { + emailFolder = 
mboxParentDir.substring(mboxParentDir.indexOf("/ImapMail/") + 9); + } + emailFolder = emailFolder + mboxFileName; + emailFolder = emailFolder.replaceAll(".sbd", ""); + + boolean errorsFound = false; try { ReadContentInputStream contentStream = new ReadContentInputStream(abstractFile); + ThunderbirdEmailParser mbox = new ThunderbirdEmailParser(); mbox.parse(contentStream); - HashMap<String, Map<String, String>> emailMap = new HashMap<String, Map<String, String>>(); - emailMap = mbox.getAllEmails(); + + HashMap<String, Map<String, String>> emailMap = mbox.getAllEmails(); for (Entry<String, Map<String, String>> entry : emailMap.entrySet()) { - Map<String, String> propertyMap = new HashMap<String, String>(); - emailId = ((entry.getKey() != null) ? entry.getKey() : "Not Available"); - propertyMap = entry.getValue(); - content = ((propertyMap.get("content") != null) ? propertyMap.get("content") : ""); - from = ((propertyMap.get(Metadata.AUTHOR) != null) ? propertyMap.get(Metadata.AUTHOR) : ""); - to = ((propertyMap.get(Metadata.MESSAGE_TO) != null) ? propertyMap.get(Metadata.MESSAGE_TO) : ""); - stringDate = ((propertyMap.get("date") != null) ? propertyMap.get("date") : ""); - if (!"".equals(stringDate)) { + /* @@@ I'd rather this code be cleaned up a bit so that we check if the value is + * set and then directly add it to the attribute. otherwise, we end up with a bunch + * of "" attribute values. + */ + Collection<BlackboardAttribute> bbattributes = new ArrayList<>(); + String emailId = ((entry.getKey() != null) ? entry.getKey() : "Not Available"); + Map<String, String> propertyMap = entry.getValue(); + String content = ((propertyMap.get("content") != null) ? propertyMap.get("content") : ""); + String from = ((propertyMap.get(Metadata.AUTHOR) != null) ? propertyMap.get(Metadata.AUTHOR) : ""); + String to = ((propertyMap.get(Metadata.MESSAGE_TO) != null) ? propertyMap.get(Metadata.MESSAGE_TO) : ""); + String stringDate = ((propertyMap.get("date") != null) ? propertyMap.get("date") : ""); + Long date = 0L; + if (stringDate.equals("") == false) { date = mbox.getDateCreated(stringDate); } - subject = ((propertyMap.get(Metadata.SUBJECT) != null) ? 
propertyMap.get(Metadata.SUBJECT) : ""); - cc = ((propertyMap.get(Metadata.MESSAGE_CC) != null) ? propertyMap.get(Metadata.MESSAGE_CC) : ""); - bcc = ((propertyMap.get(Metadata.MESSAGE_BCC) != null) ? propertyMap.get(Metadata.MESSAGE_BCC) : ""); - - Collection<BlackboardAttribute> bbattributes = new ArrayList<BlackboardAttribute>(); + String subject = ((propertyMap.get(Metadata.SUBJECT) != null) ? propertyMap.get(Metadata.SUBJECT) : ""); + String cc = ((propertyMap.get(Metadata.MESSAGE_CC) != null) ? propertyMap.get(Metadata.MESSAGE_CC) : ""); + String bcc = ((propertyMap.get(Metadata.MESSAGE_BCC) != null) ? propertyMap.get(Metadata.MESSAGE_BCC) : ""); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_EMAIL_TO.getTypeID(), MODULE_NAME, to)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_EMAIL_CC.getTypeID(), MODULE_NAME, cc)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_EMAIL_BCC.getTypeID(), MODULE_NAME, bcc)); @@ -217,22 +200,33 @@ public class ThunderbirdMboxFileIngestModule extends IngestModuleAbstractFile { bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_RCVD.getTypeID(), MODULE_NAME, date)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_DATETIME_SENT.getTypeID(), MODULE_NAME, date)); bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_SUBJECT.getTypeID(), MODULE_NAME, subject)); - bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), MODULE_NAME, folderPath)); + bbattributes.add(new BlackboardAttribute(ATTRIBUTE_TYPE.TSK_PATH.getTypeID(), MODULE_NAME, emailFolder)); BlackboardArtifact bbart; try { bbart = abstractFile.newArtifact(BlackboardArtifact.ARTIFACT_TYPE.TSK_EMAIL_MSG); bbart.addAttributes(bbattributes); + services.fireModuleDataEvent(new ModuleDataEvent(MODULE_NAME, BlackboardArtifact.ARTIFACT_TYPE.TSK_EMAIL_MSG)); } catch (TskCoreException ex) { Logger.getLogger(ThunderbirdMboxFileIngestModule.class.getName()).log(Level.WARNING, null, ex); + errorsFound = true; } - 
services.fireModuleDataEvent(new ModuleDataEvent(MODULE_NAME, BlackboardArtifact.ARTIFACT_TYPE.TSK_EMAIL_MSG)); } - } catch (FileNotFoundException ex) { + } + catch (FileNotFoundException ex) { Logger.getLogger(ThunderbirdMboxFileIngestModule.class.getName()).log(Level.WARNING, null, ex); - } catch (IOException ex) { + errorsFound = true; + } + catch (IOException ex) { Logger.getLogger(ThunderbirdMboxFileIngestModule.class.getName()).log(Level.WARNING, null, ex); - } catch (SAXException | TikaException ex) { + errorsFound = true; + } + catch (SAXException | TikaException ex) { Logger.getLogger(ThunderbirdMboxFileIngestModule.class.getName()).log(Level.WARNING, null, ex); + errorsFound = true; + } + if (errorsFound) { + // @@@ RECORD THEM... + return ProcessResult.ERROR; } return ProcessResult.OK; @@ -240,9 +234,6 @@ public class ThunderbirdMboxFileIngestModule extends IngestModuleAbstractFile { @Override public void complete() { - logger.log(Level.INFO, "complete()"); - - //module specific cleanup due completion here } @Override @@ -263,18 +254,11 @@ public class ThunderbirdMboxFileIngestModule extends IngestModuleAbstractFile { @Override public void init(IngestModuleInit initContext) { - logger.log(Level.INFO, "init()"); services = IngestServices.getDefault(); - - currentCase = Case.getCurrentCase(); - //module specific initialization here } @Override public void stop() { - logger.log(Level.INFO, "stop()"); - - //module specific cleanup due interruption here } @Override diff --git a/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxParser.java b/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxParser.java index 60b9f75ca6..9ccb22b6a7 100644 --- a/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxParser.java +++ b/thunderbirdparser/src/org/sleuthkit/autopsy/thunderbirdparser/ThunderbirdMboxParser.java @@ -37,6 +37,9 @@ import org.apache.tika.sax.BodyContentHandler; import 
org.xml.sax.ContentHandler; import org.xml.sax.SAXException; +/** + * Contains the logic to parse an MBOX file. + */ public class ThunderbirdMboxParser { /** Serial version UID */ @@ -242,8 +245,8 @@ public class ThunderbirdMboxParser { } metadata.add(property, headerContent); } else if (headerTag.equalsIgnoreCase("Subject")) { - metadata.add(ThunderbirdMetadata.SUBJECT.toString(), headerContent); - metadata.add(ThunderbirdMetadata.TITLE.toString(), headerContent); + metadata.set(ThunderbirdMetadata.SUBJECT, headerContent); + metadata.set(ThunderbirdMetadata.TITLE, headerContent); } else if (headerTag.equalsIgnoreCase("Date")) { try { Date date = parseDate(headerContent); @@ -440,8 +443,8 @@ public class ThunderbirdMboxParser { emailMetaContent.put(Metadata.MESSAGE_BCC, metadata.get(Metadata.MESSAGE_BCC)); emailMetaContent.put(Metadata.AUTHOR, metadata.get(Metadata.AUTHOR)); emailMetaContent.put("content", emailContent); - emailMetaContent.put("date", metadata.get("date")); - emailMetaContent.put(Metadata.SUBJECT, metadata.get(Metadata.SUBJECT)); + emailMetaContent.put("date", metadata.get(ThunderbirdMetadata.DATE)); + emailMetaContent.put(Metadata.SUBJECT, metadata.get(ThunderbirdMetadata.SUBJECT)); if(metadata.get(ThunderbirdMetadata.IDENTIFIER) == null){ Random r = new Random(); this.emails.put(metadata.get(Metadata.AUTHOR)+Long.toString(Math.abs(r.nextLong()), 36), emailMetaContent); diff --git a/update_versions.py b/update_versions.py index 2883021c9f..fa228d0cca 100644 --- a/update_versions.py +++ b/update_versions.py @@ -1,939 +1,939 @@ -# -# Autopsy Forensic Browser -# -# Copyright 2012-2013 Basis Technology Corp. -# Contact: carrier sleuthkit org -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -####################### -# This script exists to help us determine update the library -# versions appropriately. See this page for version details. -# -# http://wiki.sleuthkit.org/index.php?title=Autopsy_3_Module_Versions -# -# The basic idea is that this script uses javadoc/jdiff to -# compare the current state of the source code to the last -# tag and identifies if APIs were removed, added, etc. -# -# When run from the Autopsy build script, this script will: -# - Clone Autopsy and checkout to the previous release tag -# as found in the NEWS.txt file -# - Auto-discover all modules and packages -# - Run jdiff, comparing the current and previous modules -# - Use jdiff's output to determine if each module -# a) has no changes -# b) has backwards compatible changes -# c) has backwards incompatible changes -# - Based off it's compatibility, updates each module's -# a) Major version -# b) Specification version -# c) Implementation version -# - Updates the dependencies on each module depending on the -# updated version numbers -# -# Optionally, when run from the command line, one can provide the -# desired tag to compare the current version to, the directory for -# the current version of Autopsy, and whether to automatically -# update the version numbers and dependencies. 
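The bump policy this header describes (jdiff's exit code decides how the major, minor, implementation, and release numbers move) can be sketched as follows. `bump` is a hypothetical standalone helper for illustration, mirroring what `update_versions()` further down does with the Spec/implementation/release numbers; the return-code constants match those defined just below:

```python
# jdiff return codes, as defined in update_versions.py
NO_CHANGES, COMPATIBLE, NON_COMPATIBLE = 100, 101, 102

def bump(ret, spec, impl, release):
    """Apply the documented policy. spec is a (major, minor) tuple.
    Implementation always increments when a module was recompared;
    compatible API changes minor-bump the spec; incompatible changes
    major-bump the spec (minor resets to 0) and increment the release."""
    major, minor = spec
    if ret == NO_CHANGES:
        return (major, minor), impl + 1, release
    if ret == COMPATIBLE:
        return (major, minor + 1), impl + 1, release
    if ret == NON_COMPATIBLE:
        return (major + 1, 0), impl + 1, release + 1
    # jdiff returned an error; version numbers must be reviewed by hand
    raise ValueError("jdiff error: cannot determine version changes")
```

The incompatible branch corresponds to `Spec.overflow()` in the script (left part + 1, right part reset to "0"), the compatible branch to `Spec.increment()`.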
-# ------------------------------------------------------------ - -import errno -import os -import shutil -import stat -import subprocess -import sys -import traceback -from os import remove, close -from shutil import move -from tempfile import mkstemp -from xml.dom.minidom import parse, parseString - -# Jdiff return codes. Described in more detail further on -NO_CHANGES = 100 -COMPATIBLE = 101 -NON_COMPATIBLE = 102 -ERROR = 1 - -# An Autopsy module object -class Module: - # Initialize it with a name, return code, and version numbers - def __init__(self, name=None, ret=None, versions=None): - self.name = name - self.ret = ret - self.versions = versions - # As a string, the module should be it's name - def __str__(self): - return self.name - def __repr__(self): - return self.name - # When compared to another module, the two are equal if the names are the same - def __cmp__(self, other): - if isinstance(other, Module): - if self.name == other.name: - return 0 - elif self.name < other.name: - return -1 - else: - return 1 - return 1 - def __eq__(self, other): - if isinstance(other, Module): - if self.name == other.name: - return True - return False - def set_name(self, name): - self.name = name - def set_ret(self, ret): - self.ret = ret - def set_versions(self, versions): - self.versions = versions - def spec(self): - return self.versions[0] - def impl(self): - return self.versions[1] - def release(self): - return self.versions[2] - -# Representation of the Specification version number -class Spec: - # Initialize specification number, where num is a string like x.y - def __init__(self, num): - self.third = None - spec_nums = num.split(".") - if len(spec_nums) == 3: - final = spec_nums[2] - self.third = int(final) - - l, r = spec_nums[0], spec_nums[1] - - self.left = int(l) - self.right = int(r) - - def __str__(self): - return self.get() - def __cmp__(self, other): - if isinstance(other, Spec): - if self.left == other.left: - if self.right == other.right: - return 0 - 
if self.right < other.right: - return -1 - return 1 - if self.left < other.left: - return -1 - return 1 - elif isinstance(other, str): - l, r = other.split(".") - if self.left == int(l): - if self.right == int(r): - return 0 - if self.right < int(r): - return -1 - return 1 - if self.left < int(l): - return -1 - return 1 - return -1 - - def overflow(self): - return str(self.left + 1) + ".0" - def increment(self): - return str(self.left) + "." + str(self.right + 1) - def get(self): - spec_str = str(self.left) + "." + str(self.right) - if self.third is not None: - spec_str += "." + str(self.final) - return spec_str - def set(self, num): - if isinstance(num, str): - l, r = num.split(".") - self.left = int(l) - self.right = int(r) - elif isinstance(num, Spec): - self.left = num.left - self.right = num.right - return self - -# ================================ # -# Core Functions # -# ================================ # - -# Given a list of modules and the names for each version, compare -# the generated jdiff XML for each module and output the jdiff -# JavaDocs. -# -# modules: the list of all modules both versions have in common -# apiname_tag: the api name of the previous version, most likely the tag -# apiname_cur: the api name of the current version, most likely "Current" -# -# returns the exit code from the modified jdiff.jar -# return code 1 = error in jdiff -# return code 100 = no changes -# return code 101 = compatible changes -# return code 102 = incompatible changes -def compare_xml(module, apiname_tag, apiname_cur): - global docdir - make_dir(docdir) - null_file = fix_path(os.path.abspath("./thirdparty/jdiff/v-custom/lib/Null.java")) - jdiff = fix_path(os.path.abspath("./thirdparty/jdiff/v-custom/jdiff.jar")) - oldapi = fix_path("build/jdiff-xml/" + apiname_tag + "-" + module.name) - newapi = fix_path("build/jdiff-xml/" + apiname_cur + "-" + module.name) - docs = fix_path(docdir + "/" + module.name) - # Comments are strange. 
They look for a file with additional user comments in a - # directory like docs/user_comments_for_xyz. The problem being that xyz is the - # path to the new/old api. So xyz turns into multiple directories for us. - # i.e. user_comments_for_build/jdiff-xml/[tag name]-[module name]_to_build/jdiff-xml - comments = fix_path(docs + "/user_comments_for_build") - jdiff_com = fix_path(comments + "/jdiff-xml") - tag_comments = fix_path(jdiff_com + "/" + apiname_tag + "-" + module.name + "_to_build") - jdiff_tag_com = fix_path(tag_comments + "/jdiff-xml") - - if not os.path.exists(jdiff): - print("JDIFF doesn't exist.") - - make_dir(docs) - make_dir(comments) - make_dir(jdiff_com) - make_dir(tag_comments) - make_dir(jdiff_tag_com) - make_dir("jdiff-logs") - log = open("jdiff-logs/COMPARE-" + module.name + ".log", "w") - cmd = ["javadoc", - "-doclet", "jdiff.JDiff", - "-docletpath", jdiff, - "-d", docs, - "-oldapi", oldapi, - "-newapi", newapi, - "-script", - null_file] - jdiff = subprocess.Popen(cmd, stdout=log, stderr=log) - jdiff.wait() - log.close() - code = jdiff.returncode - print("Compared XML for " + module.name) - if code == NO_CHANGES: - print(" No API changes") - elif code == COMPATIBLE: - print(" API Changes are backwards compatible") - elif code == NON_COMPATIBLE: - print(" API Changes are not backwards compatible") - else: - print(" *Error in XML, most likely an empty module") - sys.stdout.flush() - return code - -# Generate the jdiff xml for the given module -# path: path to the autopsy source -# module: Module object -# name: api name for jdiff -def gen_xml(path, modules, name): - for module in modules: - # If its the regression test, the source is in the "test" dir - if module.name == "Testing": - src = os.path.join(path, module.name, "test", "qa-functional", "src") - else: - src = os.path.join(path, module.name, "src") - # xerces = os.path.abspath("./lib/xerces.jar") - xml_out = fix_path(os.path.abspath("./build/jdiff-xml/" + name + "-" + module.name)) - 
jdiff = fix_path(os.path.abspath("./thirdparty/jdiff/v-custom/jdiff.jar")) - make_dir("build/jdiff-xml") - make_dir("jdiff-logs") - log = open("jdiff-logs/GEN_XML-" + name + "-" + module.name + ".log", "w") - cmd = ["javadoc", - "-doclet", "jdiff.JDiff", - "-docletpath", jdiff, # ;" + xerces, <-- previous problems required this - "-apiname", xml_out, # leaving it in just in case it's needed once again - "-sourcepath", fix_path(src)] - cmd = cmd + get_packages(src) - jdiff = subprocess.Popen(cmd, stdout=log, stderr=log) - jdiff.wait() - log.close() - print("Generated XML for " + name + " " + module.name) - sys.stdout.flush() - -# Find all the modules in the given path -def find_modules(path): - modules = [] - # Step into each folder in the given path and - # see if it has manifest.mf - if so, it's a module - for dir in os.listdir(path): - directory = os.path.join(path, dir) - if os.path.isdir(directory): - for file in os.listdir(directory): - if file == "manifest.mf": - modules.append(Module(dir, None, None)) - return modules - -# Detects the differences between the source and tag modules -def module_diff(source_modules, tag_modules): - added_modules = [x for x in source_modules if x not in tag_modules] - removed_modules = [x for x in tag_modules if x not in source_modules] - similar_modules = [x for x in source_modules if x in tag_modules] - - added_modules = (added_modules if added_modules else []) - removed_modules = (removed_modules if removed_modules else []) - similar_modules = (similar_modules if similar_modules else []) - return similar_modules, added_modules, removed_modules - -# Reads the previous tag from NEWS.txt -def get_tag(sourcepath): - news = open(sourcepath + "/NEWS.txt", "r") - second_instance = False - for line in news: - if "----------------" in line: - if second_instance: - ver = line.split("VERSION ")[1] - ver = ver.split(" -")[0] - return ("autopsy-" + ver).strip() - else: - second_instance = True - continue - news.close() - - -# 
========================================== # -# Dependency Functions # -# ========================================== # - -# Write a new XML file, copying all the lines from projectxml -# and replacing the specification version for the code-name-base base -# with the supplied specification version spec -def set_dep_spec(projectxml, base, spec): - print(" Updating Specification version..") - orig = open(projectxml, "r") - f, abs_path = mkstemp() - new_file = open(abs_path, "w") - found_base = False - spacing = " " - sopen = "<specification-version>" - sclose = "</specification-version>\n" - for line in orig: - if base in line: - found_base = True - if found_base and sopen in line: - update = spacing + sopen + str(spec) + sclose - new_file.write(update) - else: - new_file.write(line) - new_file.close() - close(f) - orig.close() - remove(projectxml) - move(abs_path, projectxml) - -# Write a new XML file, copying all the lines from projectxml -# and replacing the release version for the code-name-base base -# with the supplied release version -def set_dep_release(projectxml, base, release): - print(" Updating Release version..") - orig = open(projectxml, "r") - f, abs_path = mkstemp() - new_file = open(abs_path, "w") - found_base = False - spacing = " " - ropen = "<release-version>" - rclose = "</release-version>\n" - for line in orig: - if base in line: - found_base = True - if found_base and ropen in line: - update = spacing + ropen + str(release) + rclose - new_file.write(update) - else: - new_file.write(line) - new_file.close() - close(f) - orig.close() - remove(projectxml) - move(abs_path, projectxml) - -# Return the dependency versions in the XML dependency node -def get_dep_versions(dep): - run_dependency = dep.getElementsByTagName("run-dependency")[0] - release_version = run_dependency.getElementsByTagName("release-version") - if release_version: - release_version = getTagText(release_version[0].childNodes) - specification_version = run_dependency.getElementsByTagName("specification-version") - if specification_version: - 
specification_version = getTagText(specification_version[0].childNodes) - return int(release_version), Spec(specification_version) - -# Given a code-name-base, see if it corresponds with any of our modules -def get_module_from_base(modules, code_name_base): - for module in modules: - if "org.sleuthkit.autopsy." + module.name.lower() == code_name_base: - return module - return None # If it didn't match one of our modules - -# Check the text between two XML tags -def getTagText(nodelist): - for node in nodelist: - if node.nodeType == node.TEXT_NODE: - return node.data - -# Check the projectxml for a dependency on any module in modules -def check_for_dependencies(projectxml, modules): - dom = parse(projectxml) - dep_list = dom.getElementsByTagName("dependency") - for dep in dep_list: - code_name_base = dep.getElementsByTagName("code-name-base")[0] - code_name_base = getTagText(code_name_base.childNodes) - module = get_module_from_base(modules, code_name_base) - if module: - print(" Found dependency on " + module.name) - release, spec = get_dep_versions(dep) - if release != module.release() and module.release() is not None: - set_dep_release(projectxml, code_name_base, module.release()) - else: print(" Release version is correct") - if spec != module.spec() and module.spec() is not None: - set_dep_spec(projectxml, code_name_base, module.spec()) - else: print(" Specification version is correct") - -# Given the module and the source directory, return -# the paths to the manifest and project properties files -def get_dependency_file(module, source): - projectxml = os.path.join(source, module.name, "nbproject", "project.xml") - if os.path.isfile(projectxml): - return projectxml - -# Verify/Update the dependencies for each module, basing the dependency -# version number off the versions in each module -def update_dependencies(modules, source): - for module in modules: - print("Checking the dependencies for " + module.name + "...") - projectxml = get_dependency_file(module, 
source) - if projectxml == None: - print(" Error finding project xml file") - else: - other = [x for x in modules] - check_for_dependencies(projectxml, other) - sys.stdout.flush() - -# ======================================== # -# Versioning Functions # -# ======================================== # - -# Return the specification version in the given project.properties/manifest.mf file -def get_specification(project, manifest): - try: - # Try to find it in the project file - # it will be there if impl version is set to append automatically - f = open(project, 'r') - for line in f: - if "spec.version.base" in line: - return Spec(line.split("=")[1].strip()) - f.close() - # If not found there, try the manifest file - f = open(manifest, 'r') - for line in f: - if "OpenIDE-Module-Specification-Version:" in line: - return Spec(line.split(": ")[1].strip()) - except Exception as e: - print("Error parsing Specification version for") - print(project) - print(e) - -# Set the specification version in the given project properties file -# but if it can't be found there, set it in the manifest file -def set_specification(project, manifest, num): - try: - # First try the project file - f = open(project, 'r') - for line in f: - if "spec.version.base" in line: - f.close() - replace(project, line, "spec.version.base=" + str(num) + "\n") - return - f.close() - # If it's not there, try the manifest file - f = open(manifest, 'r') - for line in f: - if "OpenIDE-Module-Specification-Version:" in line: - f.close() - replace(manifest, line, "OpenIDE-Module-Specification-Version: " + str(num) + "\n") - return - # Otherwise we're out of luck - print(" Error finding the Specification version to update") - print(" " + manifest) - f.close() - except: - print(" Error incrementing Specification version for") - print(" " + project) - -# Return the implementation version in the given manifest.mf file -def get_implementation(manifest): - try: - f = open(manifest, 'r') - for line in f: - if 
"OpenIDE-Module-Implementation-Version" in line: - return int(line.split(": ")[1].strip()) - f.close() - except: - print("Error parsing Implementation version for") - print(manifest) - -# Set the implementation version in the given manifest file -def set_implementation(manifest, num): - try: - f = open(manifest, 'r') - for line in f: - if "OpenIDE-Module-Implementation-Version" in line: - f.close() - replace(manifest, line, "OpenIDE-Module-Implementation-Version: " + str(num) + "\n") - return - # If it isn't there, add it - f.close() - write_implementation(manifest, num) - except: - print(" Error incrementing Implementation version for") - print(" " + manifest) - -# Rewrite the manifest file to include the implementation version -def write_implementation(manifest, num): - f = open(manifest, "r") - contents = f.read() - contents = contents[:-2] + "OpenIDE-Module-Implementation-Version: " + str(num) + "\n\n" - f.close() - f = open(manifest, "w") - f.write(contents) - f.close() - -# Return the release version in the given manifest.mf file -def get_release(manifest): - try: - f = open(manifest, 'r') - for line in f: - if "OpenIDE-Module:" in line: - return int(line.split("/")[1].strip()) - f.close() - except: - #print("Error parsing Release version for") - #print(manifest) - return 0 - -# Set the release version in the given manifest file -def set_release(manifest, num): - try: - f = open(manifest, 'r') - for line in f: - if "OpenIDE-Module:" in line: - f.close() - index = line.index('/') - len(line) + 1 - newline = line[:index] + str(num) - replace(manifest, line, newline + "\n") - return - print(" Error finding the release version to update") - print(" " + manifest) - f.close() - except: - print(" Error incrementing release version for") - print(" " + manifest) - -# Given the module and the source directory, return -# the paths to the manifest and project properties files -def get_version_files(module, source): - manifest = os.path.join(source, module.name, 
"manifest.mf") - project = os.path.join(source, module.name, "nbproject", "project.properties") - if os.path.isfile(manifest) and os.path.isfile(project): - return manifest, project - -# Returns a the current version numbers for the module in source -def get_versions(module, source): - manifest, project = get_version_files(module, source) - if manifest == None or project == None: - print(" Error finding manifeset and project properties files") - return - spec = get_specification(project, manifest) - impl = get_implementation(manifest) - release = get_release(manifest) - return [spec, impl, release] - -# Update the version numbers for every module in modules -def update_versions(modules, source): - for module in modules: - versions = module.versions - manifest, project = get_version_files(module, source) - print("Updating " + module.name + "...") - if manifest == None or project == None: - print(" Error finding manifeset and project properties files") - return - if module.ret == COMPATIBLE: - versions = [versions[0].set(versions[0].increment()), versions[1] + 1, versions[2]] - set_specification(project, manifest, versions[0]) - set_implementation(manifest, versions[1]) - module.set_versions(versions) - elif module.ret == NON_COMPATIBLE: - versions = [versions[0].set(versions[0].overflow()), versions[1] + 1, versions[2] + 1] - set_specification(project, manifest, versions[0]) - set_implementation(manifest, versions[1]) - set_release(manifest, versions[2]) - module.set_versions(versions) - elif module.ret == NO_CHANGES: - versions = [versions[0], versions[1] + 1, versions[2]] - set_implementation(manifest, versions[1]) - module.set_versions(versions) - elif module.ret == None: - versions = [Spec("1.0"), 1, 1] - set_specification(project, manifest, versions[0]) - set_implementation(manifest, versions[1]) - set_release(manifest, versions[2]) - module.set_versions(versions) - sys.stdout.flush() - -# Given a list of the added modules, remove the modules -# which have the 
correct 'new module default' version number -def remove_correct_added(modules): - correct = [x for x in modules] - for module in modules: - if module.spec() == "1.0" or module.spec() == "0.0": - if module.impl() == 1: - if module.release() == 1 or module.release() == 0: - correct.remove(module) - return correct - -# ==================================== # -# Helper Functions # -# ==================================== # - -# Replace pattern with subst in given file -def replace(file, pattern, subst): - #Create temp file - fh, abs_path = mkstemp() - new_file = open(abs_path,'w') - old_file = open(file) - for line in old_file: - new_file.write(line.replace(pattern, subst)) - #close temp file - new_file.close() - close(fh) - old_file.close() - #Remove original file - remove(file) - #Move new file - move(abs_path, file) - -# Given a list of modules print the version numbers that need changing -def print_version_updates(modules): - f = open("gen_version.txt", "a") - for module in modules: - versions = module.versions - if module.ret == COMPATIBLE: - output = (module.name + ":\n") - output += ("\tSpecification:\t" + str(versions[0]) + "\t->\t" + str(versions[0].increment()) + "\n") - output += ("\tImplementation:\t" + str(versions[1]) + "\t->\t" + str(versions[1] + 1) + "\n") - output += ("\tRelease:\tNo Change.\n") - output += ("\n") - print(output) - sys.stdout.flush() - f.write(output) - elif module.ret == NON_COMPATIBLE: - output = (module.name + ":\n") - output += ("\tSpecification:\t" + str(versions[0]) + "\t->\t" + str(versions[0].overflow()) + "\n") - output += ("\tImplementation:\t" + str(versions[1]) + "\t->\t" + str(versions[1] + 1) + "\n") - output += ("\tRelease:\t" + str(versions[2]) + "\t->\t" + str(versions[2] + 1) + "\n") - output += ("\n") - print(output) - sys.stdout.flush() - f.write(output) - elif module.ret == ERROR: - output = (module.name + ":\n") - output += ("\t*Unable to detect necessary changes\n") - output += ("\tSpecification:\t" + 
str(versions[0]) + "\n") - output += ("\tImplementation:\t" + str(versions[1]) + "\n") - output += ("\tRelease:\t\t" + str(versions[2]) + "\n") - output += ("\n") - print(output) - f.write(output) - sys.stdout.flush() - elif module.ret == NO_CHANGES: - output = (module.name + ":\n") - if versions[1] is None: - output += ("\tImplementation: None\n") - else: - output += ("\tImplementation:\t" + str(versions[1]) + "\t->\t" + str(versions[1] + 1) + "\n") - output += ("\n") - print(output) - sys.stdout.flush() - f.write(output) - elif module.ret is None: - output = ("Added " + module.name + ":\n") - if module.spec() != "1.0" and module.spec() != "0.0": - output += ("\tSpecification:\t" + str(module.spec()) + "\t->\t" + "1.0\n") - output += ("\n") - if module.impl() != 1: - output += ("\tImplementation:\t" + str(module.impl()) + "\t->\t" + "1\n") - output += ("\n") - if module.release() != 1 and module.release() != 0: - output += ("Release:\t\t" + str(module.release()) + "\t->\t" + "1\n") - output += ("\n") - print(output) - sys.stdout.flush() - f.write(output) - sys.stdout.flush() - f.close() - -# Changes cygwin paths to Windows -def fix_path(path): - if "cygdrive" in path: - new_path = path[11:] - return "C:/" + new_path - else: - return path - -# Print a 'title' -def printt(title): - print("\n" + title) - lines = "" - for letter in title: - lines += "-" - print(lines) - sys.stdout.flush() - -# Get a list of package names in the given path -# The path is expected to be of the form {base}/module/src -# -# NOTE: We currently only check for packages of the form -# org.sleuthkit.autopsy.x -# If we add other namespaces for commercial modules we will -# have to add a check here -def get_packages(path): - packages = [] - package_path = os.path.join(path, "org", "sleuthkit", "autopsy") - for folder in os.listdir(package_path): - package_string = "org.sleuthkit.autopsy." 
- packages.append(package_string + folder) - return packages - -# Create the given directory, if it doesn't already exist -def make_dir(dir): - try: - if not os.path.isdir(dir): - os.mkdir(dir) - if os.path.isdir(dir): - return True - return False - except: - print("Exception thrown when creating directory") - return False - -# Delete the given directory, and make sure it is deleted -def del_dir(dir): - try: - if os.path.isdir(dir): - shutil.rmtree(dir, ignore_errors=False, onerror=handleRemoveReadonly) - if os.path.isdir(dir): - return False - else: - return True - return True - except: - print("Exception thrown when deleting directory") - traceback.print_exc() - return False - -# Handle any permisson errors thrown by shutil.rmtree -def handleRemoveReadonly(func, path, exc): - excvalue = exc[1] - if func in (os.rmdir, os.remove) and excvalue.errno == errno.EACCES: - os.chmod(path, stat.S_IRWXU| stat.S_IRWXG| stat.S_IRWXO) # 0777 - func(path) - else: - raise - -# Run git clone and git checkout for the tag -def do_git(tag, tag_dir): - try: - printt("Cloning Autopsy tag " + tag + " into dir " + tag_dir + " (this could take a while)...") - subprocess.call(["git", "clone", "https://github.com/sleuthkit/autopsy.git", tag_dir], - stdout=subprocess.PIPE) - printt("Checking out tag " + tag + "...") - subprocess.call(["git", "checkout", tag], - stdout=subprocess.PIPE, - cwd=tag_dir) - return True - except Exception as ex: - print("Error cloning and checking out Autopsy: ", sys.exc_info()[0]) - print(str(ex)) - print("The terminal you are using most likely does not recognize git commands.") - return False - -# Get the flags from argv -def args(): - try: - sys.argv.pop(0) - while sys.argv: - arg = sys.argv.pop(0) - if arg == "-h" or arg == "--help": - return 1 - elif arg == "-t" or arg == "--tag": - global tag - tag = sys.argv.pop(0) - elif arg == "-s" or arg == "--source": - global source - source = sys.argv.pop(0) - elif arg == "-d" or arg == "--dir": - global docdir - 
docdir = sys.argv.pop(0) - elif arg == "-a" or arg == "--auto": - global dry - dry = False - else: - raise Exception() - except: - pass - -# Print script run info -def printinfo(): - global tag - global source - global docdir - global dry - printt("Release script information:") - if source is None: - source = fix_path(os.path.abspath(".")) - print("Using source directory:\n " + source) - if tag is None: - tag = get_tag(source) - print("Checking out to tag:\n " + tag) - if docdir is None: - docdir = fix_path(os.path.abspath("./jdiff-javadocs")) - print("Generating jdiff JavaDocs in:\n " + docdir) - if dry is True: - print("Dry run: will not auto-update version numbers") - sys.stdout.flush() - -# Print the script's usage/help -def usage(): - return \ - """ - USAGE: - Compares the API of the current Autopsy source code with a previous - tagged version. By default, it will detect the previous tag from - the NEWS file and will not update the versions in the source code. - - OPTIONAL FLAGS: - -t --tag Specify a previous tag to compare to. - Otherwise the NEWS file will be used. - - -d --dir The output directory for the jdiff JavaDocs. If no - directory is given, the default is jdiff-javadocs/{module}. - - -s --source The directory containing Autopsy's source code. - - -a --auto Automatically update version numbers (not dry). - - -h --help Prints this usage. 
- """ - -# ==================================== # -# Main Functionality # -# ==================================== # - -# Where the magic happens -def main(): - global tag; global source; global docdir; global dry - tag = None; source = None; docdir = None; dry = True - - ret = args() - if ret: - print(usage()) - return 0 - printinfo() - - # ----------------------------------------------- - # 1) Clone Autopsy, checkout to given tag/commit - # 2) Get the modules in the clone and the source - # 3) Generate the xml comparison - # ----------------------------------------------- - if not del_dir("./build/" + tag): - print("\n\n=========================================") - print(" Failed to delete previous Autopsy clone.") - print(" Unable to continue...") - print("=========================================") - return 1 - tag_dir = os.path.abspath("./build/" + tag) - if not do_git(tag, tag_dir): - return 1 - sys.stdout.flush() - - tag_modules = find_modules(tag_dir) - source_modules = find_modules(source) - - printt("Generating jdiff XML reports...") - apiname_tag = tag - apiname_cur = "current" - gen_xml(tag_dir, tag_modules, apiname_tag) - gen_xml(source, source_modules, apiname_cur) - - printt("Deleting cloned Autopsy directory...") - print("Clone successfully deleted" if del_dir(tag_dir) else "Failed to delete clone") - sys.stdout.flush() - - # ----------------------------------------------------- - # 1) Seperate modules into added, similar, and removed - # 2) Compare XML for each module - # ----------------------------------------------------- - printt("Comparing modules found...") - similar_modules, added_modules, removed_modules = module_diff(source_modules, tag_modules) - if added_modules or removed_modules: - for m in added_modules: - print("+ Added " + m.name) - sys.stdout.flush() - for m in removed_modules: - print("- Removed " + m.name) - sys.stdout.flush() - else: - print("No added or removed modules") - sys.stdout.flush() - - printt("Comparing jdiff 
outputs...") - for module in similar_modules: - module.set_ret(compare_xml(module, apiname_tag, apiname_cur)) - print("Refer to the jdiff-javadocs folder for more details") - - # ------------------------------------------------------------ - # 1) Do versioning - # 2) Auto-update version numbers in files and the_modules list - # 3) Auto-update dependencies - # ------------------------------------------------------------ - printt("Auto-detecting version numbers and changes...") - for module in added_modules: - module.set_versions(get_versions(module, source)) - for module in similar_modules: - module.set_versions(get_versions(module, source)) - - added_modules = remove_correct_added(added_modules) - the_modules = similar_modules + added_modules - print_version_updates(the_modules) - - if not dry: - printt("Auto-updating version numbers...") - update_versions(the_modules, source) - print("All auto-updates complete") - - printt("Detecting and auto-updating dependencies...") - update_dependencies(the_modules, source) - - printt("Deleting jdiff XML...") - xml_dir = os.path.abspath("./build/jdiff-xml") - print("XML successfully deleted" if del_dir(xml_dir) else "Failed to delete XML") - - print("\n--- Script completed successfully ---") - return 0 - -# Start off the script -if __name__ == "__main__": - sys.exit(main()) +# +# Autopsy Forensic Browser +# +# Copyright 2012-2013 Basis Technology Corp. +# Contact: carrier sleuthkit org +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+#######################
+# This script exists to help us determine how to update the library
+# versions appropriately. See this page for version details.
+#
+# http://wiki.sleuthkit.org/index.php?title=Autopsy_3_Module_Versions
+#
+# The basic idea is that this script uses javadoc/jdiff to
+# compare the current state of the source code to the last
+# tag and identifies if APIs were removed, added, etc.
+#
+# When run from the Autopsy build script, this script will:
+# - Clone Autopsy and check out the previous release tag
+#   as found in the NEWS.txt file
+# - Auto-discover all modules and packages
+# - Run jdiff, comparing the current and previous modules
+# - Use jdiff's output to determine if each module
+#   a) has no changes
+#   b) has backwards compatible changes
+#   c) has backwards incompatible changes
+# - Based on its compatibility, updates each module's
+#   a) Major version
+#   b) Specification version
+#   c) Implementation version
+# - Updates the dependencies on each module depending on the
+#   updated version numbers
+#
+# Optionally, when run from the command line, one can provide the
+# desired tag to compare the current version to, the directory for
+# the current version of Autopsy, and whether to automatically
+# update the version numbers and dependencies.
+# ------------------------------------------------------------
+
+import errno
+import os
+import shutil
+import stat
+import subprocess
+import sys
+import traceback
+from os import remove, close
+from shutil import move
+from tempfile import mkstemp
+from xml.dom.minidom import parse, parseString
+
+# Jdiff return codes.
Described in more detail further on
+NO_CHANGES = 100
+COMPATIBLE = 101
+NON_COMPATIBLE = 102
+ERROR = 1
+
+# An Autopsy module object
+class Module:
+    # Initialize it with a name, return code, and version numbers
+    def __init__(self, name=None, ret=None, versions=None):
+        self.name = name
+        self.ret = ret
+        self.versions = versions
+    # As a string, the module should be its name
+    def __str__(self):
+        return self.name
+    def __repr__(self):
+        return self.name
+    # When compared to another module, the two are equal if the names are the same
+    def __cmp__(self, other):
+        if isinstance(other, Module):
+            if self.name == other.name:
+                return 0
+            elif self.name < other.name:
+                return -1
+            else:
+                return 1
+        return 1
+    def __eq__(self, other):
+        if isinstance(other, Module):
+            if self.name == other.name:
+                return True
+        return False
+    def set_name(self, name):
+        self.name = name
+    def set_ret(self, ret):
+        self.ret = ret
+    def set_versions(self, versions):
+        self.versions = versions
+    def spec(self):
+        return self.versions[0]
+    def impl(self):
+        return self.versions[1]
+    def release(self):
+        return self.versions[2]
+
+# Representation of the Specification version number
+class Spec:
+    # Initialize specification number, where num is a string like x.y
+    def __init__(self, num):
+        self.third = None
+        spec_nums = num.split(".")
+        if len(spec_nums) == 3:
+            final = spec_nums[2]
+            self.third = int(final)
+
+        l, r = spec_nums[0], spec_nums[1]
+
+        self.left = int(l)
+        self.right = int(r)
+
+    def __str__(self):
+        return self.get()
+    def __cmp__(self, other):
+        if isinstance(other, Spec):
+            if self.left == other.left:
+                if self.right == other.right:
+                    return 0
+                if self.right < other.right:
+                    return -1
+                return 1
+            if self.left < other.left:
+                return -1
+            return 1
+        elif isinstance(other, str):
+            l, r = other.split(".")
+            if self.left == int(l):
+                if self.right == int(r):
+                    return 0
+                if self.right < int(r):
+                    return -1
+                return 1
+            if self.left < int(l):
+                return -1
+            return 1
+        return -1
+
+    def overflow(self):
+        return str(self.left + 1) + ".0"
+    def increment(self):
+        return str(self.left) + "." + str(self.right + 1)
+    def get(self):
+        spec_str = str(self.left) + "." + str(self.right)
+        if self.third is not None:
+            spec_str += "." + str(self.third)
+        return spec_str
+    def set(self, num):
+        if isinstance(num, str):
+            l, r = num.split(".")
+            self.left = int(l)
+            self.right = int(r)
+        elif isinstance(num, Spec):
+            self.left = num.left
+            self.right = num.right
+        return self
+
+# ================================ #
+#          Core Functions          #
+# ================================ #
+
+# Given a list of modules and the names for each version, compare
+# the generated jdiff XML for each module and output the jdiff
+# JavaDocs.
+#
+# modules: the list of all modules both versions have in common
+# apiname_tag: the api name of the previous version, most likely the tag
+# apiname_cur: the api name of the current version, most likely "Current"
+#
+# returns the exit code from the modified jdiff.jar
+# return code 1 = error in jdiff
+# return code 100 = no changes
+# return code 101 = compatible changes
+# return code 102 = incompatible changes
+def compare_xml(module, apiname_tag, apiname_cur):
+    global docdir
+    make_dir(docdir)
+    null_file = fix_path(os.path.abspath("./thirdparty/jdiff/v-custom/lib/Null.java"))
+    jdiff = fix_path(os.path.abspath("./thirdparty/jdiff/v-custom/jdiff.jar"))
+    oldapi = fix_path("build/jdiff-xml/" + apiname_tag + "-" + module.name)
+    newapi = fix_path("build/jdiff-xml/" + apiname_cur + "-" + module.name)
+    docs = fix_path(docdir + "/" + module.name)
+    # Comments are strange. They look for a file with additional user comments in a
+    # directory like docs/user_comments_for_xyz. The problem is that xyz is the
+    # path to the new/old api. So xyz turns into multiple directories for us.
+    # i.e.
user_comments_for_build/jdiff-xml/[tag name]-[module name]_to_build/jdiff-xml
+    comments = fix_path(docs + "/user_comments_for_build")
+    jdiff_com = fix_path(comments + "/jdiff-xml")
+    tag_comments = fix_path(jdiff_com + "/" + apiname_tag + "-" + module.name + "_to_build")
+    jdiff_tag_com = fix_path(tag_comments + "/jdiff-xml")
+
+    if not os.path.exists(jdiff):
+        print("JDIFF doesn't exist.")
+
+    make_dir(docs)
+    make_dir(comments)
+    make_dir(jdiff_com)
+    make_dir(tag_comments)
+    make_dir(jdiff_tag_com)
+    make_dir("jdiff-logs")
+    log = open("jdiff-logs/COMPARE-" + module.name + ".log", "w")
+    cmd = ["javadoc",
+           "-doclet", "jdiff.JDiff",
+           "-docletpath", jdiff,
+           "-d", docs,
+           "-oldapi", oldapi,
+           "-newapi", newapi,
+           "-script",
+           null_file]
+    jdiff = subprocess.Popen(cmd, stdout=log, stderr=log)
+    jdiff.wait()
+    log.close()
+    code = jdiff.returncode
+    print("Compared XML for " + module.name)
+    if code == NO_CHANGES:
+        print("  No API changes")
+    elif code == COMPATIBLE:
+        print("  API Changes are backwards compatible")
+    elif code == NON_COMPATIBLE:
+        print("  API Changes are not backwards compatible")
+    else:
+        print("  *Error in XML, most likely an empty module")
+    sys.stdout.flush()
+    return code
+
+# Generate the jdiff xml for the given modules
+# path: path to the autopsy source
+# modules: list of Module objects
+# name: api name for jdiff
+def gen_xml(path, modules, name):
+    for module in modules:
+        # If it's the regression test, the source is in the "test" dir
+        if module.name == "Testing":
+            src = os.path.join(path, module.name, "test", "qa-functional", "src")
+        else:
+            src = os.path.join(path, module.name, "src")
+        # xerces = os.path.abspath("./lib/xerces.jar")
+        xml_out = fix_path(os.path.abspath("./build/jdiff-xml/" + name + "-" + module.name))
+        jdiff = fix_path(os.path.abspath("./thirdparty/jdiff/v-custom/jdiff.jar"))
+        make_dir("build/jdiff-xml")
+        make_dir("jdiff-logs")
+        log = open("jdiff-logs/GEN_XML-" + name + "-" + module.name + ".log", "w")
+        cmd =
["javadoc", + "-doclet", "jdiff.JDiff", + "-docletpath", jdiff, # ;" + xerces, <-- previous problems required this + "-apiname", xml_out, # leaving it in just in case it's needed once again + "-sourcepath", fix_path(src)] + cmd = cmd + get_packages(src) + jdiff = subprocess.Popen(cmd, stdout=log, stderr=log) + jdiff.wait() + log.close() + print("Generated XML for " + name + " " + module.name) + sys.stdout.flush() + +# Find all the modules in the given path +def find_modules(path): + modules = [] + # Step into each folder in the given path and + # see if it has manifest.mf - if so, it's a module + for dir in os.listdir(path): + directory = os.path.join(path, dir) + if os.path.isdir(directory): + for file in os.listdir(directory): + if file == "manifest.mf": + modules.append(Module(dir, None, None)) + return modules + +# Detects the differences between the source and tag modules +def module_diff(source_modules, tag_modules): + added_modules = [x for x in source_modules if x not in tag_modules] + removed_modules = [x for x in tag_modules if x not in source_modules] + similar_modules = [x for x in source_modules if x in tag_modules] + + added_modules = (added_modules if added_modules else []) + removed_modules = (removed_modules if removed_modules else []) + similar_modules = (similar_modules if similar_modules else []) + return similar_modules, added_modules, removed_modules + +# Reads the previous tag from NEWS.txt +def get_tag(sourcepath): + news = open(sourcepath + "/NEWS.txt", "r") + second_instance = False + for line in news: + if "----------------" in line: + if second_instance: + ver = line.split("VERSION ")[1] + ver = ver.split(" -")[0] + return ("autopsy-" + ver).strip() + else: + second_instance = True + continue + news.close() + + +# ========================================== # +# Dependency Functions # +# ========================================== # + +# Write a new XML file, copying all the lines from projectxml +# and replacing the specification version 
for the code-name-base base
+# with the supplied specification version spec
+def set_dep_spec(projectxml, base, spec):
+    print("  Updating Specification version..")
+    orig = open(projectxml, "r")
+    f, abs_path = mkstemp()
+    new_file = open(abs_path, "w")
+    found_base = False
+    spacing = " "
+    sopen = "<specification-version>"
+    sclose = "</specification-version>\n"
+    for line in orig:
+        if base in line:
+            found_base = True
+        if found_base and sopen in line:
+            update = spacing + sopen + str(spec) + sclose
+            new_file.write(update)
+        else:
+            new_file.write(line)
+    new_file.close()
+    close(f)
+    orig.close()
+    remove(projectxml)
+    move(abs_path, projectxml)
+
+# Write a new XML file, copying all the lines from projectxml
+# and replacing the release version for the code-name-base base
+# with the supplied release version
+def set_dep_release(projectxml, base, release):
+    print("  Updating Release version..")
+    orig = open(projectxml, "r")
+    f, abs_path = mkstemp()
+    new_file = open(abs_path, "w")
+    found_base = False
+    spacing = " "
+    ropen = "<release-version>"
+    rclose = "</release-version>\n"
+    for line in orig:
+        if base in line:
+            found_base = True
+        if found_base and ropen in line:
+            update = spacing + ropen + str(release) + rclose
+            new_file.write(update)
+        else:
+            new_file.write(line)
+    new_file.close()
+    close(f)
+    orig.close()
+    remove(projectxml)
+    move(abs_path, projectxml)
+
+# Return the dependency versions in the XML dependency node
+def get_dep_versions(dep):
+    run_dependency = dep.getElementsByTagName("run-dependency")[0]
+    release_version = run_dependency.getElementsByTagName("release-version")
+    if release_version:
+        release_version = getTagText(release_version[0].childNodes)
+    specification_version = run_dependency.getElementsByTagName("specification-version")
+    if specification_version:
+        specification_version = getTagText(specification_version[0].childNodes)
+    return int(release_version), Spec(specification_version)
+
+# Given a code-name-base, see if it corresponds with any of our modules
+def get_module_from_base(modules,
code_name_base):
+    for module in modules:
+        if "org.sleuthkit.autopsy." + module.name.lower() == code_name_base:
+            return module
+    return None # If it didn't match one of our modules
+
+# Check the text between two XML tags
+def getTagText(nodelist):
+    for node in nodelist:
+        if node.nodeType == node.TEXT_NODE:
+            return node.data
+
+# Check the projectxml for a dependency on any module in modules
+def check_for_dependencies(projectxml, modules):
+    dom = parse(projectxml)
+    dep_list = dom.getElementsByTagName("dependency")
+    for dep in dep_list:
+        code_name_base = dep.getElementsByTagName("code-name-base")[0]
+        code_name_base = getTagText(code_name_base.childNodes)
+        module = get_module_from_base(modules, code_name_base)
+        if module:
+            print("  Found dependency on " + module.name)
+            release, spec = get_dep_versions(dep)
+            if release != module.release() and module.release() is not None:
+                set_dep_release(projectxml, code_name_base, module.release())
+            else: print("  Release version is correct")
+            if spec != module.spec() and module.spec() is not None:
+                set_dep_spec(projectxml, code_name_base, module.spec())
+            else: print("  Specification version is correct")
+
+# Given the module and the source directory, return
+# the path to the module's project.xml file
+def get_dependency_file(module, source):
+    projectxml = os.path.join(source, module.name, "nbproject", "project.xml")
+    if os.path.isfile(projectxml):
+        return projectxml
+
+# Verify/Update the dependencies for each module, basing the dependency
+# version number off the versions in each module
+def update_dependencies(modules, source):
+    for module in modules:
+        print("Checking the dependencies for " + module.name + "...")
+        projectxml = get_dependency_file(module, source)
+        if projectxml == None:
+            print("  Error finding project xml file")
+        else:
+            other = [x for x in modules]
+            check_for_dependencies(projectxml, other)
+        sys.stdout.flush()
+
+# ======================================== #
+#          Versioning
Functions # +# ======================================== # + +# Return the specification version in the given project.properties/manifest.mf file +def get_specification(project, manifest): + try: + # Try to find it in the project file + # it will be there if impl version is set to append automatically + f = open(project, 'r') + for line in f: + if "spec.version.base" in line: + return Spec(line.split("=")[1].strip()) + f.close() + # If not found there, try the manifest file + f = open(manifest, 'r') + for line in f: + if "OpenIDE-Module-Specification-Version:" in line: + return Spec(line.split(": ")[1].strip()) + except Exception as e: + print("Error parsing Specification version for") + print(project) + print(e) + +# Set the specification version in the given project properties file +# but if it can't be found there, set it in the manifest file +def set_specification(project, manifest, num): + try: + # First try the project file + f = open(project, 'r') + for line in f: + if "spec.version.base" in line: + f.close() + replace(project, line, "spec.version.base=" + str(num) + "\n") + return + f.close() + # If it's not there, try the manifest file + f = open(manifest, 'r') + for line in f: + if "OpenIDE-Module-Specification-Version:" in line: + f.close() + replace(manifest, line, "OpenIDE-Module-Specification-Version: " + str(num) + "\n") + return + # Otherwise we're out of luck + print(" Error finding the Specification version to update") + print(" " + manifest) + f.close() + except: + print(" Error incrementing Specification version for") + print(" " + project) + +# Return the implementation version in the given manifest.mf file +def get_implementation(manifest): + try: + f = open(manifest, 'r') + for line in f: + if "OpenIDE-Module-Implementation-Version" in line: + return int(line.split(": ")[1].strip()) + f.close() + except: + print("Error parsing Implementation version for") + print(manifest) + +# Set the implementation version in the given manifest file +def 
set_implementation(manifest, num):
+    try:
+        f = open(manifest, 'r')
+        for line in f:
+            if "OpenIDE-Module-Implementation-Version" in line:
+                f.close()
+                replace(manifest, line, "OpenIDE-Module-Implementation-Version: " + str(num) + "\n")
+                return
+        # If it isn't there, add it
+        f.close()
+        write_implementation(manifest, num)
+    except:
+        print("  Error incrementing Implementation version for")
+        print("  " + manifest)
+
+# Rewrite the manifest file to include the implementation version
+def write_implementation(manifest, num):
+    f = open(manifest, "r")
+    contents = f.read()
+    contents = contents[:-2] + "OpenIDE-Module-Implementation-Version: " + str(num) + "\n\n"
+    f.close()
+    f = open(manifest, "w")
+    f.write(contents)
+    f.close()
+
+# Return the release version in the given manifest.mf file
+def get_release(manifest):
+    try:
+        f = open(manifest, 'r')
+        for line in f:
+            if "OpenIDE-Module:" in line:
+                return int(line.split("/")[1].strip())
+        f.close()
+    except:
+        #print("Error parsing Release version for")
+        #print(manifest)
+        return 0
+
+# Set the release version in the given manifest file
+def set_release(manifest, num):
+    try:
+        f = open(manifest, 'r')
+        for line in f:
+            if "OpenIDE-Module:" in line:
+                f.close()
+                index = line.index('/') - len(line) + 1
+                newline = line[:index] + str(num)
+                replace(manifest, line, newline + "\n")
+                return
+        print("  Error finding the release version to update")
+        print("  " + manifest)
+        f.close()
+    except:
+        print("  Error incrementing release version for")
+        print("  " + manifest)
+
+# Given the module and the source directory, return
+# the paths to the manifest and project properties files
+def get_version_files(module, source):
+    manifest = os.path.join(source, module.name, "manifest.mf")
+    project = os.path.join(source, module.name, "nbproject", "project.properties")
+    if os.path.isfile(manifest) and os.path.isfile(project):
+        return manifest, project
+
+# Return the current version numbers for the module in source
+def get_versions(module, source):
+    manifest, project = get_version_files(module, source)
+    if manifest == None or project == None:
+        print("  Error finding manifest and project properties files")
+        return
+    spec = get_specification(project, manifest)
+    impl = get_implementation(manifest)
+    release = get_release(manifest)
+    return [spec, impl, release]
+
+# Update the version numbers for every module in modules
+def update_versions(modules, source):
+    for module in modules:
+        versions = module.versions
+        manifest, project = get_version_files(module, source)
+        print("Updating " + module.name + "...")
+        if manifest == None or project == None:
+            print("  Error finding manifest and project properties files")
+            return
+        if module.ret == COMPATIBLE:
+            versions = [versions[0].set(versions[0].increment()), versions[1] + 1, versions[2]]
+            set_specification(project, manifest, versions[0])
+            set_implementation(manifest, versions[1])
+            module.set_versions(versions)
+        elif module.ret == NON_COMPATIBLE:
+            versions = [versions[0].set(versions[0].overflow()), versions[1] + 1, versions[2] + 1]
+            set_specification(project, manifest, versions[0])
+            set_implementation(manifest, versions[1])
+            set_release(manifest, versions[2])
+            module.set_versions(versions)
+        elif module.ret == NO_CHANGES:
+            versions = [versions[0], versions[1] + 1, versions[2]]
+            set_implementation(manifest, versions[1])
+            module.set_versions(versions)
+        elif module.ret == None:
+            versions = [Spec("1.0"), 1, 1]
+            set_specification(project, manifest, versions[0])
+            set_implementation(manifest, versions[1])
+            set_release(manifest, versions[2])
+            module.set_versions(versions)
+        sys.stdout.flush()
+
+# Given a list of the added modules, remove the modules
+# which have the correct 'new module default' version number
+def remove_correct_added(modules):
+    correct = [x for x in modules]
+    for module in modules:
+        if module.spec() == "1.0" or module.spec() == "0.0":
+            if module.impl() == 1:
+                if module.release() == 1 or
module.release() == 0: + correct.remove(module) + return correct + +# ==================================== # +# Helper Functions # +# ==================================== # + +# Replace pattern with subst in given file +def replace(file, pattern, subst): + #Create temp file + fh, abs_path = mkstemp() + new_file = open(abs_path,'w') + old_file = open(file) + for line in old_file: + new_file.write(line.replace(pattern, subst)) + #close temp file + new_file.close() + close(fh) + old_file.close() + #Remove original file + remove(file) + #Move new file + move(abs_path, file) + +# Given a list of modules print the version numbers that need changing +def print_version_updates(modules): + f = open("gen_version.txt", "a") + for module in modules: + versions = module.versions + if module.ret == COMPATIBLE: + output = (module.name + ":\n") + output += ("\tSpecification:\t" + str(versions[0]) + "\t->\t" + str(versions[0].increment()) + "\n") + output += ("\tImplementation:\t" + str(versions[1]) + "\t->\t" + str(versions[1] + 1) + "\n") + output += ("\tRelease:\tNo Change.\n") + output += ("\n") + print(output) + sys.stdout.flush() + f.write(output) + elif module.ret == NON_COMPATIBLE: + output = (module.name + ":\n") + output += ("\tSpecification:\t" + str(versions[0]) + "\t->\t" + str(versions[0].overflow()) + "\n") + output += ("\tImplementation:\t" + str(versions[1]) + "\t->\t" + str(versions[1] + 1) + "\n") + output += ("\tRelease:\t" + str(versions[2]) + "\t->\t" + str(versions[2] + 1) + "\n") + output += ("\n") + print(output) + sys.stdout.flush() + f.write(output) + elif module.ret == ERROR: + output = (module.name + ":\n") + output += ("\t*Unable to detect necessary changes\n") + output += ("\tSpecification:\t" + str(versions[0]) + "\n") + output += ("\tImplementation:\t" + str(versions[1]) + "\n") + output += ("\tRelease:\t\t" + str(versions[2]) + "\n") + output += ("\n") + print(output) + f.write(output) + sys.stdout.flush() + elif module.ret == NO_CHANGES: + output 
= (module.name + ":\n") + if versions[1] is None: + output += ("\tImplementation: None\n") + else: + output += ("\tImplementation:\t" + str(versions[1]) + "\t->\t" + str(versions[1] + 1) + "\n") + output += ("\n") + print(output) + sys.stdout.flush() + f.write(output) + elif module.ret is None: + output = ("Added " + module.name + ":\n") + if module.spec() != "1.0" and module.spec() != "0.0": + output += ("\tSpecification:\t" + str(module.spec()) + "\t->\t" + "1.0\n") + output += ("\n") + if module.impl() != 1: + output += ("\tImplementation:\t" + str(module.impl()) + "\t->\t" + "1\n") + output += ("\n") + if module.release() != 1 and module.release() != 0: + output += ("Release:\t\t" + str(module.release()) + "\t->\t" + "1\n") + output += ("\n") + print(output) + sys.stdout.flush() + f.write(output) + sys.stdout.flush() + f.close() + +# Changes cygwin paths to Windows +def fix_path(path): + if "cygdrive" in path: + new_path = path[11:] + return "C:/" + new_path + else: + return path + +# Print a 'title' +def printt(title): + print("\n" + title) + lines = "" + for letter in title: + lines += "-" + print(lines) + sys.stdout.flush() + +# Get a list of package names in the given path +# The path is expected to be of the form {base}/module/src +# +# NOTE: We currently only check for packages of the form +# org.sleuthkit.autopsy.x +# If we add other namespaces for commercial modules we will +# have to add a check here +def get_packages(path): + packages = [] + package_path = os.path.join(path, "org", "sleuthkit", "autopsy") + for folder in os.listdir(package_path): + package_string = "org.sleuthkit.autopsy." 
+        packages.append(package_string + folder)
+    return packages
+
+# Create the given directory, if it doesn't already exist
+def make_dir(dir):
+    try:
+        if not os.path.isdir(dir):
+            os.mkdir(dir)
+        if os.path.isdir(dir):
+            return True
+        return False
+    except:
+        print("Exception thrown when creating directory")
+        return False
+
+# Delete the given directory, and make sure it is deleted
+def del_dir(dir):
+    try:
+        if os.path.isdir(dir):
+            shutil.rmtree(dir, ignore_errors=False, onerror=handleRemoveReadonly)
+            if os.path.isdir(dir):
+                return False
+            else:
+                return True
+        return True
+    except:
+        print("Exception thrown when deleting directory")
+        traceback.print_exc()
+        return False
+
+# Handle any permission errors thrown by shutil.rmtree
+def handleRemoveReadonly(func, path, exc):
+    excvalue = exc[1]
+    if func in (os.rmdir, os.remove) and excvalue.errno == errno.EACCES:
+        os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO) # 0777
+        func(path)
+    else:
+        raise
+
+# Run git clone and git checkout for the tag
+def do_git(tag, tag_dir):
+    try:
+        printt("Cloning Autopsy tag " + tag + " into dir " + tag_dir + " (this could take a while)...")
+        subprocess.call(["git", "clone", "https://github.com/sleuthkit/autopsy.git", tag_dir],
+                        stdout=subprocess.PIPE)
+        printt("Checking out tag " + tag + "...")
+        subprocess.call(["git", "checkout", tag],
+                        stdout=subprocess.PIPE,
+                        cwd=tag_dir)
+        return True
+    except Exception as ex:
+        print("Error cloning and checking out Autopsy: ", sys.exc_info()[0])
+        print(str(ex))
+        print("The terminal you are using most likely does not recognize git commands.")
+        return False
+
+# Get the flags from argv
+def args():
+    try:
+        sys.argv.pop(0)
+        while sys.argv:
+            arg = sys.argv.pop(0)
+            if arg == "-h" or arg == "--help":
+                return 1
+            elif arg == "-t" or arg == "--tag":
+                global tag
+                tag = sys.argv.pop(0)
+            elif arg == "-s" or arg == "--source":
+                global source
+                source = sys.argv.pop(0)
+            elif arg == "-d" or arg == "--dir":
+                global docdir
+                docdir = sys.argv.pop(0)
+            elif arg == "-a" or arg == "--auto":
+                global dry
+                dry = False
+            else:
+                # Unknown flag: fall through to the usage message
+                return 1
+    except:
+        pass
+
+# Print script run info
+def printinfo():
+    global tag
+    global source
+    global docdir
+    global dry
+    printt("Release script information:")
+    if source is None:
+        source = fix_path(os.path.abspath("."))
+    print("Using source directory:\n  " + source)
+    if tag is None:
+        tag = get_tag(source)
+    print("Checking out to tag:\n  " + tag)
+    if docdir is None:
+        docdir = fix_path(os.path.abspath("./jdiff-javadocs"))
+    print("Generating jdiff JavaDocs in:\n  " + docdir)
+    if dry is True:
+        print("Dry run: will not auto-update version numbers")
+    sys.stdout.flush()
+
+# Print the script's usage/help
+def usage():
+    return \
+    """
+    USAGE:
+      Compares the API of the current Autopsy source code with a previous
+      tagged version. By default, it will detect the previous tag from
+      the NEWS file and will not update the versions in the source code.
+
+    OPTIONAL FLAGS:
+      -t --tag      Specify a previous tag to compare to.
+                    Otherwise the NEWS file will be used.
+
+      -d --dir      The output directory for the jdiff JavaDocs. If no
+                    directory is given, the default is jdiff-javadocs/{module}.
+
+      -s --source   The directory containing Autopsy's source code.
+
+      -a --auto     Automatically update version numbers (not dry).
+
+      -h --help     Prints this usage.
+ """ + +# ==================================== # +# Main Functionality # +# ==================================== # + +# Where the magic happens +def main(): + global tag; global source; global docdir; global dry + tag = None; source = None; docdir = None; dry = True + + ret = args() + if ret: + print(usage()) + return 0 + printinfo() + + # ----------------------------------------------- + # 1) Clone Autopsy, checkout to given tag/commit + # 2) Get the modules in the clone and the source + # 3) Generate the xml comparison + # ----------------------------------------------- + if not del_dir("./build/" + tag): + print("\n\n=========================================") + print(" Failed to delete previous Autopsy clone.") + print(" Unable to continue...") + print("=========================================") + return 1 + tag_dir = os.path.abspath("./build/" + tag) + if not do_git(tag, tag_dir): + return 1 + sys.stdout.flush() + + tag_modules = find_modules(tag_dir) + source_modules = find_modules(source) + + printt("Generating jdiff XML reports...") + apiname_tag = tag + apiname_cur = "current" + gen_xml(tag_dir, tag_modules, apiname_tag) + gen_xml(source, source_modules, apiname_cur) + + printt("Deleting cloned Autopsy directory...") + print("Clone successfully deleted" if del_dir(tag_dir) else "Failed to delete clone") + sys.stdout.flush() + + # ----------------------------------------------------- + # 1) Seperate modules into added, similar, and removed + # 2) Compare XML for each module + # ----------------------------------------------------- + printt("Comparing modules found...") + similar_modules, added_modules, removed_modules = module_diff(source_modules, tag_modules) + if added_modules or removed_modules: + for m in added_modules: + print("+ Added " + m.name) + sys.stdout.flush() + for m in removed_modules: + print("- Removed " + m.name) + sys.stdout.flush() + else: + print("No added or removed modules") + sys.stdout.flush() + + printt("Comparing jdiff 
outputs...") + for module in similar_modules: + module.set_ret(compare_xml(module, apiname_tag, apiname_cur)) + print("Refer to the jdiff-javadocs folder for more details") + + # ------------------------------------------------------------ + # 1) Do versioning + # 2) Auto-update version numbers in files and the_modules list + # 3) Auto-update dependencies + # ------------------------------------------------------------ + printt("Auto-detecting version numbers and changes...") + for module in added_modules: + module.set_versions(get_versions(module, source)) + for module in similar_modules: + module.set_versions(get_versions(module, source)) + + added_modules = remove_correct_added(added_modules) + the_modules = similar_modules + added_modules + print_version_updates(the_modules) + + if not dry: + printt("Auto-updating version numbers...") + update_versions(the_modules, source) + print("All auto-updates complete") + + printt("Detecting and auto-updating dependencies...") + update_dependencies(the_modules, source) + + printt("Deleting jdiff XML...") + xml_dir = os.path.abspath("./build/jdiff-xml") + print("XML successfully deleted" if del_dir(xml_dir) else "Failed to delete XML") + + print("\n--- Script completed successfully ---") + return 0 + +# Start off the script +if __name__ == "__main__": + sys.exit(main())