Chrome forensics: processing the history file with Perl

Hi again,

In line with the previous post, we’re going to talk again about forensics scripting using Perl and SQLite. However, this time we are going to use a new example: the Google Chrome history processor.

I’ve uploaded the script to my recently created Google Code site, so everyone can easily get the source and play with it. I like this example because, compared to the last one we covered (the Mozilla Firefox downloads processor), it offers more functionality and includes extra features that may be useful for your investigative processes.

The new script uses Spreadsheet::WriteExcelXML to produce Excel XML output that can be easily loaded into Excel or LibreOffice. It also generates a double-comma separated plain text file, which is used as the source to build the XML report. Some people will find the plain text file useful; others will prefer to load the XML into their favourite spreadsheet software. It’s really up to you. The double comma prevents clashes with single commas in URL titles, but feel free to modify the script and use your favourite delimiter.
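As a minimal sketch of that delimiting choice (the helper name and sample fields here are illustrative, not taken from the actual script), a double-comma record is just a join:

```perl
use strict;
use warnings;

# Build a double-comma separated record; single commas inside
# titles or URLs no longer collide with the field delimiter.
sub make_record {
    my @fields = @_;
    return join(',,', @fields);
}

my $line = make_record('1', 'http://example.com/', 'Example, the site');
print "$line\n";    # 1,,http://example.com/,,Example, the site
```

Swapping the delimiter is then a one-character change in the join.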

One interesting feature of Chrome’s history is the storage of transition types. Our script automatically transforms the transition values stored in the visits table inside the history SQLite file. These values have to be logically ANDed with a CORE MASK, initially set to 0xFF, to obtain a number between 0 and 10 that describes the transition type. The original Chrome source code is self-explanatory, so check the SVN trunk for further details. The transformation is coded below:

        use Switch;   # provides the switch/case construct used below

        my $core_mask = 0xff;
        my $transition = $core_mask & $row[10];   # $row[10] holds visits.transition

        switch ($transition) {
		case 0		{ $transition_text = "LINK: User reached page by clicking a link on another page" }
		case 1		{ $transition_text = "TYPED: User typed page in URL bar" }
		case 2		{ $transition_text = "AUTO BOOKMARK: User got to this page through a suggestion in the UI" }
		case 3		{ $transition_text = "AUTO SUBFRAME: Content automatically loaded in a non-toplevel frame" }
		case 4		{ $transition_text = "MANUAL SUBFRAME: Subframe navigation explicitly requested by the user" }
		case 5		{ $transition_text = "GENERATED: User typed page in the URL bar and selected an entry that did not look like a URL" }
		case 6		{ $transition_text = "START PAGE: Page was specified in the command line or is the start page" }
		case 7		{ $transition_text = "FORM SUBMIT: User filled out values in a form and submitted it" }
		case 8		{ $transition_text = "RELOAD: User reloaded the page" }
		case 9		{ $transition_text = "KEYWORD: URL generated from a replaceable keyword other than the default search provider" }
		case 10		{ $transition_text = "KEYWORD GENERATED: Visit was generated by a keyword" }
		else		{ print "Unable to understand the transition value. Check, something is horribly wrong here :)" }
        }

The script, explained

The core operation in the script is a SQL SELECT statement that retrieves data from two tables inside the history file: urls and visits. Since we want correlated information rather than scattered data, we only grab rows where the id value in the urls table matches the url column in the visits table, which is the numerical value linking each visit event back to its entry in the urls table.

SELECT urls.id, urls.url, urls.title, urls.visit_count, urls.typed_count, urls.last_visit_time, urls.hidden, urls.favicon_id, visits.visit_time, visits.from_visit, visits.transition, visits.segment_id, visits.is_indexed FROM urls, visits WHERE urls.id = visits.url
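A minimal sketch of running such a statement through DBI (assuming DBD::SQLite is installed; the in-memory database, trimmed table layouts and sample rows below are purely illustrative stand-ins for a real History file):

```perl
use strict;
use warnings;
use DBI;

# Illustrative in-memory database standing in for a real History file;
# the tables are trimmed to just the columns this sketch needs.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1 });
$dbh->do("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT, title TEXT)");
$dbh->do("CREATE TABLE visits (id INTEGER PRIMARY KEY, url INTEGER, transition INTEGER)");
$dbh->do("INSERT INTO urls VALUES (1, 'http://example.com/', 'Example')");
$dbh->do("INSERT INTO visits VALUES (1, 1, 1)");

# Same join condition as in the post: urls.id must match visits.url.
my $sth = $dbh->prepare(
    "SELECT urls.url, urls.title, visits.transition
       FROM urls, visits WHERE urls.id = visits.url");
$sth->execute();

my @results;
while (my @row = $sth->fetchrow_array) {
    push @results, [@row];
    print "URL: $row[0]  Title: $row[1]  Transition: $row[2]\n";
}
$dbh->disconnect;
```

The real script points the connect string at the History file inside the Chrome profile instead of :memory:.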

Once this statement is prepared and executed, we can loop over the results and generate the text file, transforming the timestamps into human-readable values and turning the transition values into transition events, as explained before. Each event is explained in the code. This text file will then be converted into an XML report with the following fields:

ID: The primary key, used for cross-table references. Stored in the urls table
URL: The stored visited URL
TITLE: Page title
VISIT COUNT: Total visit count for the given URL
TYPED COUNT: Number of times the URL was typed
LAST VISIT TIME: Timestamp of the last visit, as stored in the urls table
VISIT TIME: Timestamp of the visit, as stored in the visits table
IS HIDDEN: Indicates whether the URL is displayed by the autocomplete function. 0 means it will be displayed, 1 means it will remain hidden
FAVICON ID: Used to reference the favicon table
FROM VISIT: References the URL the visit came from. Shows 0 if no reference is in place
TRANSITION: Shows the transition type
SEGMENT ID: Stores the segments value, used in cross-table references
IS INDEXED: Determines whether there is indexed data for this visit. 1 if there is, 0 if there isn't
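On the timestamp transformation mentioned above: Chrome stores last_visit_time and visit_time as microseconds since 1601-01-01 00:00:00 UTC (the Windows FILETIME epoch), so the conversion to Unix time divides by one million and subtracts the 11644473600-second offset between the two epochs. A small sketch (the sample value is illustrative):

```perl
use strict;
use warnings;

# Chrome timestamps count microseconds since 1601-01-01 00:00:00 UTC.
# Dividing by 1,000,000 and subtracting the 11644473600-second gap
# between 1601 and 1970 yields a standard Unix timestamp.
sub chrome_to_unix {
    my ($chrome_us) = @_;
    return int($chrome_us / 1_000_000) - 11644473600;
}

my $unix = chrome_to_unix(12996681521000000);   # sample value only
print scalar gmtime($unix), "\n";
```

With that in place, localtime or gmtime renders the value human readable as usual.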

I’ve uploaded an example of both outputs here: plain text report and XML report.

Hope this may be useful for you :D

With regards,

Using Perl and SQLite in digital forensics: Firefox Download Manager as an example


In computer forensics, some of the most relevant artifacts used to extract information are those related to web browsers. In modern IT, most desktop user activity happens in the browser, so it becomes crucial to have good insight into the operations carried out there in order to use that information in our processes.

In this article we will discuss the feasibility of using Perl to perform scriptable forensics on browsers. We are going to use Mozilla Firefox under Linux as an example, as it uses SQLite databases to store browser activity information. That gives us the opportunity to use Perl’s DBI module to perform console database queries. Please note that the described process and the code can be easily ported to Windows, so feel free to play around with it, reporting back your experience if you wish. My recommendation, if you plan to test on Windows, is to use ActivePerl from ActiveState.

A little bit of background

Once installed, Firefox creates a set of folders and files under /home/username/.mozilla/firefox. One of them is profiles.ini, which contains some startup information, such as the profile to be used, stored in the Path variable:

[Screenshot: profiles.ini contents showing the Path variable]
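A quick sketch of pulling that Path value out of profiles.ini (the parsing is deliberately naive, and the ini text below is a stand-in for a real profile, not the actual file):

```perl
use strict;
use warnings;

# Naively extract the Path entry from profiles.ini text. A real script
# would read /home/username/.mozilla/firefox/profiles.ini instead.
sub profile_path {
    my ($ini_text) = @_;
    return $ini_text =~ /^Path=(.+)$/m ? $1 : undef;
}

my $ini = <<'INI';
[Profile0]
Name=default
IsRelative=1
Path=fczslsx1.default
INI

print profile_path($ini), "\n";   # fczslsx1.default
```

A multi-profile installation would need a fuller ini parser, but for a single default profile this is enough to locate the SQLite files.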

Inside the profile directory we can easily locate several SQLite files we can use in our investigative process. As an example, we will use the downloads.sqlite file. Since Firefox 3, this file stores information regarding download operations performed in Firefox: the Download Manager uses it to store and track in-progress, queued and finished downloads. However, any of the SQLite files could be used by modifying the statements in the example code according to the documentation for each schema. Note that Firefox is an open source project; apart from the code itself being available to inspect, almost everything is documented in its development resources, including but not limited to database schema layouts, naming conventions and variable formats.

[Screenshot: SQLite files inside the Firefox profile directory]

Using the sqlite3 toolset we can easily inspect the tables in the schema, enabling header printing for the queries:

[Screenshot: sqlite3 session inspecting the moz_downloads table]

This file has only one table, called moz_downloads, with the following columns: id, name, source, target, tempPath, startTime, endTime, state, referrer, entityID, currBytes, maxBytes, mimeType, preferredApplication, preferredAction and autoResume. We will use this information for our script in a little while. As explained before, these are documented on Mozilla’s dev sites, so feel free to browse the documentation and find out the purpose of each element.
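The same inspection can be done programmatically with SQLite’s PRAGMA table_info (a sketch assuming DBD::SQLite is available; the table created here is a trimmed stand-in for the real moz_downloads, not its full schema):

```perl
use strict;
use warnings;
use DBI;

# Enumerate a table's columns with PRAGMA table_info, the programmatic
# equivalent of inspecting the schema in the sqlite3 shell. The table
# below mirrors only the first few moz_downloads columns.
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1 });
$dbh->do("CREATE TABLE moz_downloads (id INTEGER PRIMARY KEY, name TEXT, source TEXT, target TEXT)");

my @columns;
for my $col (@{ $dbh->selectall_arrayref("PRAGMA table_info(moz_downloads)") }) {
    push @columns, $col->[1];   # second field of each row is the column name
}
print join(", ", @columns), "\n";   # id, name, source, target
$dbh->disconnect;
```

Pointed at a real downloads.sqlite, this prints the full sixteen-column list quoted above.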

Using Perl to retrieve the contents

One of the main advantages of using Perl in forensic practice is that it lets investigators script operations with excellent regular expression handling, which makes languages such as Perl and Python very attractive for these purposes. The code we are going to use is the following:


use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=/home/shernando/.mozilla/firefox/fczslsx1.default/downloads.sqlite","","");
    if ($dbh->err()) { die "$DBI::errstr\n"; }

    my $sth = $dbh->prepare("select * from moz_downloads");
    $sth->execute();

    while (my @row = $sth->fetchrow_array) {

    print "Download ID: $row[0]\n";
    print "File name: $row[1]\n";
    print "Download URL: $row[2]\n";
    print "Local file name: $row[3]\n";
    print "Temp path (if available): $row[4]\n";

    # Firefox uses PRTime timestamps: 64-bit integer counts of microseconds since midnight, 1 January 1970 UTC #
    # We can convert to Unix / POSIX time by dividing by 1,000,000 #

        my $starttime = localtime($row[5]/1000000);
        my $endtime = localtime($row[6]/1000000);

    print "Start time: $row[5] $starttime\n";
    print "End time: $row[6] $endtime\n";
    print "State: $row[7]\n";
    print "Referer: $row[8]\n";
    print "Entity ID: $row[9]\n";
    print "Current bytes: $row[10]\n";
    print "Max. bytes: $row[11]\n";
    print "Mime type: $row[12]\n";
    print "Preferred application: $row[13]\n";
    print "Preferred action: $row[14]\n";
    print "Auto Resume: $row[15]\n";
    print "-------------------------------\n";

    }

    $dbh->disconnect;



Please note the transformation required to bring PRTime to Unix/POSIX time in order to make the start and end dates human readable. Needless to say, make sure you declare the correct path for your SQLite file, otherwise the script will not return any results.

The script in action

As seen in the previous sqlite3 query, there are three downloads on the inspected system. The first two are completed regular downloads:

[Screenshot: script output for the two completed downloads]

The third download is a slightly different case:

[Screenshot: script output for the paused download]

The State: 4 indicates a paused download, which naturally produces a temporary file and an entity ID used to resume the download after it has been paused. Of course, the current bytes count differs from max bytes, as the download is not complete.
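For reference, these state codes correspond to Firefox’s nsIDownloadManager constants; a partial lookup table (worth double-checking against the Mozilla source for your Firefox version) could look like this:

```perl
use strict;
use warnings;

# Partial map of moz_downloads state codes to nsIDownloadManager
# constant names; consult the Mozilla source for the full list.
my %state = (
    0 => 'DOWNLOADING',
    1 => 'FINISHED',
    2 => 'FAILED',
    3 => 'CANCELED',
    4 => 'PAUSED',
);

my $code = 4;   # the value seen in the third download
printf "State: %d (%s)\n", $code, $state{$code} // 'UNKNOWN';
```

Folding such a table into the script turns the raw state column into something readable at a glance, just as the transition map does for Chrome.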

Hope this information is useful for your investigative processes. In my particular case, I find that writing these quick and dirty scripts makes my life much easier :)

Comments and questions are welcome.

With regards,