Mac OSX Safari Autopsy Plugin

markmckinnon
3 min read · Jan 31, 2017

So it has been about a month since my last post about creating Autopsy Python plugins. Since then I have been thinking about what plugins I could create, and one that jumped out at me was a plugin to parse Mac OSX Safari data.

Looking at Safari on Mac OSX, the following files can be parsed to get information from the Safari browser. The information is stored in the directory /Users/<user id>/Library/Safari. A sketch of how the plugin can locate these files follows the list below.

Web history — History.db
Downloads — Downloads.plist
Bookmarks — Bookmarks.plist
LastSession — LastSession.plist
Topsites — TopSites.plist
RecentlyClosedTabs — RecentlyClosedTabs.plist
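
Inside the ingest module, one way to locate those files is with Autopsy's FileManager. This is only a sketch: the file names are the ones listed above, and the "Safari" parent-path filter is my assumption about how the search gets narrowed, not necessarily what the real plugin does.

# Sketch: find the Safari files for every user on the data source.
# Assumes this runs inside a data source ingest module's process() method,
# where dataSource is passed in by Autopsy.
from org.sleuthkit.autopsy.casemodule import Case

file_manager = Case.getCurrentCase().getServices().getFileManager()

safari_files = []
for name in ("History.db", "Downloads.plist", "Bookmarks.plist",
             "LastSession.plist", "TopSites.plist", "RecentlyClosedTabs.plist"):
    # The third argument restricts hits to paths containing "Safari".
    safari_files.extend(file_manager.findFiles(dataSource, name, "Safari"))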

We will start with the web history first. Looking at the History.db file, you may see that it has a WAL file associated with it. Because of that, the WAL file needs to be extracted along with the History.db file; if you do not extract it, you may not get all of the web history, since there may be data still sitting in the WAL file. Once the History.db and WAL file have been exported to the Autopsy temp directory, each History.db file can be opened and the data selected out for each user on the system. The following SQL is used to export the data for the TSK_WEB_HISTORY extracted content.

Select a.url 'URL', b.visit_time + 978307200 'Date_Accessed', c.url 'Referrer_URL', b.title 'Title', 'Safari' 'Program_Name'
from history_visits b
left join history_items a on a.id = b.history_item
left join history_items c on c.id = b.redirect_source;
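
For context, the sketch below shows one way the History.db and its WAL file can be copied out to the Autopsy temp directory and the query run over JDBC. The Autopsy/Sleuth Kit classes are real, but the variable names (history_file, wal_file) and the temp-file naming are assumptions, not necessarily how the plugin does it.

# Sketch: export History.db plus its -wal file, then run the query above.
# Assumes history_file and wal_file are AbstractFile objects found with
# FileManager in the earlier sketch.
import os
from java.io import File
from java.lang import Class
from java.sql import DriverManager
from org.sleuthkit.autopsy.casemodule import Case
from org.sleuthkit.autopsy.datamodel import ContentUtils

temp_dir = Case.getCurrentCase().getTempDirectory()
db_path = os.path.join(temp_dir, str(history_file.getId()) + "-History.db")

# Write both files out; SQLite replays the WAL when the database is opened.
ContentUtils.writeToFile(history_file, File(db_path))
ContentUtils.writeToFile(wal_file, File(db_path + "-wal"))

Class.forName("org.sqlite.JDBC").newInstance()
db_conn = DriverManager.getConnection("jdbc:sqlite:%s" % db_path)
result_set = db_conn.createStatement().executeQuery(
    "Select a.url 'URL', b.visit_time + 978307200 'Date_Accessed', "
    "c.url 'Referrer_URL', b.title 'Title', 'Safari' 'Program_Name' "
    "from history_visits b "
    "left join history_items a on a.id = b.history_item "
    "left join history_items c on c.id = b.redirect_source;")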

You will notice that the visit time has 978307200 added to it. This is the difference, in seconds, between the Unix epoch (January 1, 1970) and the Mac OSX epoch (January 1, 2001). You will also notice that there are two (2) left outer joins; this is so the URL and referrer URL can be pulled in without dropping rows where one of them does not exist.
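
If you want to sanity check that constant yourself, it is just the number of seconds between the two epochs:

# 978307200 is the number of seconds from the Unix epoch (1970-01-01)
# to the Mac OSX / Cocoa epoch (2001-01-01).
from datetime import datetime
offset = (datetime(2001, 1, 1) - datetime(1970, 1, 1)).total_seconds()
print(offset)  # 978307200.0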

Once this query is run, each row of data can be added as attributes to a TSK_WEB_HISTORY artifact. The attributes from the select statement will be added, as well as two (2) other attributes that cannot be created via SQL: domain and user. Once an artifact has been created, the URL for that artifact is checked to see if it contains any search data; if it does, the data is added to a TSK_WEB_SEARCH_QUERY artifact as well. Once all the artifacts have been added for each user's web history, an event is fired to let the Autopsy UI know that new data has been added and that it should update itself.
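
A compressed sketch of that loop is below. The Sleuth Kit classes and attribute types are real, but MODULE_NAME is just a placeholder, the user attribute and search-query check are left out for brevity, and result_set/history_file carry over from the previous sketch.

# Sketch: turn each row of the query above into a TSK_WEB_HISTORY artifact.
from urlparse import urlparse
from org.sleuthkit.datamodel import BlackboardArtifact, BlackboardAttribute
from org.sleuthkit.autopsy.ingest import IngestServices, ModuleDataEvent

MODULE_NAME = "ParseSafariHistory"  # assumption: whatever name the module registers

while result_set.next():
    url = result_set.getString("URL")
    art = history_file.newArtifact(BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY)
    art.addAttribute(BlackboardAttribute(
        BlackboardAttribute.ATTRIBUTE_TYPE.TSK_URL, MODULE_NAME, url))
    art.addAttribute(BlackboardAttribute(
        BlackboardAttribute.ATTRIBUTE_TYPE.TSK_DATETIME_ACCESSED, MODULE_NAME,
        result_set.getLong("Date_Accessed")))
    if url:
        # Domain cannot be produced by the SQL, so derive it from the URL here.
        art.addAttribute(BlackboardAttribute(
            BlackboardAttribute.ATTRIBUTE_TYPE.TSK_DOMAIN, MODULE_NAME,
            urlparse(url).netloc))

# Tell the Autopsy UI that new TSK_WEB_HISTORY content exists so it refreshes.
IngestServices.getInstance().fireModuleDataEvent(ModuleDataEvent(
    MODULE_NAME, BlackboardArtifact.ARTIFACT_TYPE.TSK_WEB_HISTORY))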

The next artifacts to create are based on the plist files. Each plist will be parsed using an external program that is a modified version of the plist2db.py program created by cheeky4n6monkey. The modified program puts the plists into a more usable format that can be read from the SQLite database it creates. The data is then extracted from SQLite and inserted into their respective artifacts. Two (2) of the artifacts already exist: TSK_WEB_DOWNLOAD and TSK_WEB_BOOKMARK. Three (3) new artifacts may be created based on the data extracted: TSK_SAFARI_LASTSESSION, TSK_SAFARI_RECENTLYCLOSED and TSK_SAFARI_TOPSITES. There are also several new attributes that will be created; you can look at the code to see what they are.
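
Those new artifact types have to be registered with the case database before any artifacts can use them. A minimal sketch for one of them is below; the display name here is a guess, the real names are in the plugin source.

# Sketch: register a custom artifact type; Autopsy raises an exception if
# the type already exists (e.g. on a second run), hence the try/except.
from org.sleuthkit.autopsy.casemodule import Case

sk_case = Case.getCurrentCase().getSleuthkitCase()
try:
    sk_case.addArtifactType("TSK_SAFARI_TOPSITES", "Safari Top Sites")
except:
    pass  # already registered by a previous run of the module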

When you execute the plugin you can see the extracted content that is created. I have uploaded the initial plugin to my GitHub account under Autopsy Plugins; it can be found here. There is some further work and testing needed, but it is a start to the first plugin I have created this year. Comments and suggestions are always appreciated. Enjoy.
