Wednesday, April 16, 2014

Daily Blog #297: TriForce beta ends in two weeks

Hello Reader,
       For those of you hoping to try the full version of the Triforce Advanced NTFS Journal Parser before the beta ends, you have two weeks left to do so. You can sign up for the beta here:
https://docs.google.com/forms/d/1GzOMe-QHtB12ZnI4ZTjLA06DJP6ZScXngO42ZDGIpR0/viewform

Signing up will provide you with a link to this month's beta. We have another beta version going out this week so you can test the new filter features we showed on the Forensic Lunch on Friday, as well as general stability. In addition, now that the rules engine is finished we will be pushing out and testing our full signatures for automatic detection of wipes, timestamp changes, CD burns, deletions, NTFS-3G writes and other interesting things.

In addition, you should know that we plan to take care of our beta testers once the beta ends. We appreciate the testing and feedback we've received and will be offering a one-time presale to only those who participated in the beta. You should also know that we don't plan to make this a dongle protected product; instead we will be licensing it with a license file and offline activation so that you can run the Triforce on any system, even within an air gapped lab.

We will have a full product website up soon with a FAQ, along with a real manual, video tutorials and more! So my suggestion would be that if you haven't signed up for the beta yet, you should do so. We have a long development road map ahead and are very excited about where we plan to take this tool and technology.

Tuesday, April 15, 2014

Daily Blog #296: Domain lastlogin timestamps and tracking recon

Hello Reader,
         Normally I would save something like this for Saturday Reading, but the following two articles brought up enough interesting thoughts that they deserved their own post. The first is from 2009 and, if you are doing any work within an Active Directory environment, deserves your attention. Read it here:
http://blogs.technet.com/b/askds/archive/2009/04/15/the-lastlogontimestamp-attribute-what-it-was-designed-for-and-how-it-works.aspx

What's important to take away from this article is that Active Directory, since the introduction of the Windows 2003 domain functional level, provides a synchronized timestamp across domain controllers called 'lastLogonTimestamp'. This first of the two articles we will talk about today is important if you are interested in two things:

1. When each user last interacted with the domain. The article describes the levels of interaction as interactive, network, and service logons. However, as you'll see in the second article, that isn't exactly true. What is true is that this timestamp is synchronized across all domain controllers within the domain.
2. The timestamp itself does not reflect the true last logon time of a user. It just reflects whether that user account has logged in within the last 14 days.

The second point may seem to make the first point and its caveat less useful, but I would say don't give up on it just yet. Take a look at the second article linked here:
http://blogs.technet.com/b/askpfeplat/archive/2014/04/14/how-lastlogontimestamp-is-updated-with-kerberos-s4u2self.aspx

where we learn that not only is the timestamp synchronized across domain controllers, but applications querying across the domain also count towards this logon event. So why is this useful to us? What should you take away from this?

1. If an attacker or a rogue insider is trying to gather intelligence across your enterprise by querying different groups, service accounts and the like, it will in fact trigger a logon event that changes the lastLogonTimestamp. This means that if you quickly preserve these timestamps today, you can record the last logon values of service accounts that don't perform normal domain logins. Then, moving forward, if that timestamp changes you can act on it to determine who is suddenly enumerating information about your domain.
2. If you have the extended auditing rules set up, as mentioned in the second article, you can catch which account is actually going out and enumerating. The enumeration does not require that the querying account has the credentials of the resource being queried, and it does not actually log into that account. Instead it's just a byproduct of how the querying is done within the API itself.

So there you go. Internal IR people, go petition IT to add the additional event logging to your domains and preserve the current state of your lastLogonTimestamp values now so you can take advantage of this to its full extent.
Consultants, start querying this across the domain to gather additional intelligence about when recon may have taken place. If you start noticing that a large number of old service accounts and unused accounts all have recent last logon timestamps, that should be a clue that this timestamp may relate to some domain recon within the environment.
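If you want a starting point for pulling and preserving that baseline, here's a rough sketch (not a polished tool) using Python and the ldap3 package. The domain controller name, base DN, and credentials below are placeholders you would swap for your own environment, and for a large domain you would want to add paged searching.

import os
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection, SUBTREE, ALL

def filetime_to_dt(filetime):
    # lastLogonTimestamp is stored as a Windows FILETIME:
    # 100-nanosecond intervals since 1601-01-01 UTC.
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=int(filetime) // 10)

# Placeholder server, account, and base DN for illustration only.
server = Server('dc01.corp.example.com', get_info=ALL)
conn = Connection(server, user='analyst@corp.example.com', password=os.environ.get('LDAP_PW', ''), auto_bind=True)

conn.search('DC=corp,DC=example,DC=com',
            '(&(objectClass=user)(lastLogonTimestamp=*))',
            search_scope=SUBTREE,
            attributes=['sAMAccountName', 'lastLogonTimestamp'])

for entry in conn.entries:
    value = entry['lastLogonTimestamp'].value
    # Depending on how ldap3 parses the AD schema this may already be a datetime;
    # otherwise convert the raw FILETIME integer ourselves.
    ts = value if isinstance(value, datetime) else filetime_to_dt(value)
    print(entry['sAMAccountName'].value, ts.isoformat())

Save that output somewhere safe; any service account whose value moves later on is worth a closer look.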

Let me know your thoughts!

Monday, April 14, 2014

Daily Blog #295: Sunday Funday 4/13/14 Winner!

Hello Reader,
           Another challenge has ended and we had a range of very good answers this week. I never know what to expect when I throw out a question. I thought this week's question would get some responses, but I didn't expect how much information would be submitted in the answers. You all certainly know your SQLite forensics! With that said, Andrew Case was the clear winner this week. Not only did he cite the specific code that determines the behavior of deletion within the SQLite source, he also gave full references for further reading. Well done Andrew, you are a Sunday Funday winner!

The Challenge:
  SQLite is becoming one of the most common application databases used across multiple operating systems and devices. As DFIR analysts we love SQLite for its ability to preserve deleted data. For this challenge let's see how well you understand why this rich deleted data set exists. Answer the following questions.
1. Why are deleted records accessible in SQLite databases?
2. What is the write ahead journal?
3. What will cause deleted records to be overwritten?

Winning Answer:
Andrew Case, @attrc

1. Why are deleted records accessible in SQLite databases?

Deleted records are often accessible in SQLite databases due to the default mode of Sqlite not performing secure deletion. This can be seen by following the code path for the handler of DELETE queries/operations inside the database. To start, the function sqlite3GenerateRowDelete inside of src/delete.c can be analyzed. In this function, on line 675 of delete.c [1], the following code is called:

sqlite3VdbeAddOp2(v, OP_Delete, iDataCur, (count?OPFLAG_NCHANGE:0));

This function calls into Sqlite’s vdbe [2] (virtual database engine) in order to trigger an OP_Delete operation. The handler for OP_delete can be found on line 4144 of vdbe.c [3]. Inside this function, the data from the file on disk is deleted through a call to:

rc = sqlite3BtreeDelete(pC->pCursor);

sqlite3BtreeDelete is implemented inside of btree.c on line 7111 [4]. To remove an individual record (row/column) from the database file, clearCell and dropCell are called. dropCell is responsible for unlinking the record from the tree and does not touch the record’s actual data. clearCell’s behavior depends on if the secure_delete pragma [5] is set on the database being affected or if it is set globally. If it is set in either of these cases, then the data is overwritten with zeroes through a call to memset. By default this flag is NOT set though and the cell’s contents are not altered at all.

This default setting of secure_delete to off is the key aspect as to why records are recoverable. Since a cell’s contents are not securely deleted by default, and no modern mainstream applications set the flag, the remnant data is easily recoverable through forensics analysis.

An aside: It is possible to programmatically recover deleted records by walking the database’s freelist of pages. This list is populated by records that have been deleted and can be re-allocated during new writes to the database. A blog post on Linux Sleuthing gives an algorithm to accomplish this [7].

References
[1] http://repo.or.cz/w/sqlite.git/blob/f6ae24a0e5c5c5d22770ab70992dfab6b9d6fc5e:/src/delete.c#l675
[2] http://www.sqlite.org/vdbe.html
[3] http://repo.or.cz/w/sqlite.git/blob/f6ae24a0e5c5c5d22770ab70992dfab6b9d6fc5e:/src/vdbe.c#l4144
[4] http://repo.or.cz/w/sqlite.git/blob/f6ae24a0e5c5c5d22770ab70992dfab6b9d6fc5e:/src/btree.c#l7111
[5] http://www.tutorialspoint.com/sqlite/sqlite_pragma.htm
[6] http://repo.or.cz/w/sqlite.git/blob/f6ae24a0e5c5c5d22770ab70992dfab6b9d6fc5e:/src/btree.c#l5341
[7] http://linuxsleuthing.blogspot.com/2013/09/recovering-data-from-deleted-sqlite.html
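If you want to see that default for yourself, here's a quick sketch using Python's built-in sqlite3 module. The file and table names are made up for the demo, and exact behavior can vary with SQLite version and journal settings, but with secure_delete off the deleted marker should still be sitting in the file:

import os
import sqlite3

def marker_left_after_delete(secure_delete):
    # Insert a marker row, delete it, then check whether its bytes remain in the file.
    path = 'secure_delete_demo.db'
    if os.path.exists(path):
        os.remove(path)
    con = sqlite3.connect(path, isolation_level=None)  # autocommit keeps the demo simple
    con.execute('PRAGMA secure_delete = {}'.format(1 if secure_delete else 0))
    con.execute('CREATE TABLE t (v TEXT)')
    con.execute("INSERT INTO t VALUES ('FORENSIC_MARKER')")
    # The WHERE clause avoids SQLite's whole-table truncate optimization, so the
    # row goes through the normal cell delete path described above.
    con.execute("DELETE FROM t WHERE v = 'FORENSIC_MARKER'")
    con.close()
    with open(path, 'rb') as f:
        return b'FORENSIC_MARKER' in f.read()

print('secure_delete off, marker recoverable:', marker_left_after_delete(False))  # expect True
print('secure_delete on,  marker recoverable:', marker_left_after_delete(True))   # expect False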

2. What is the write ahead journal?

The write ahead journal [1] is used as a replacement for Sqlite’s old method of database journaling. In the WAL model, when contents are written to the database, instead of going directly to the database file, they go to the WAL file. This means that the current database contents are actually in the WAL file, and the database is partially holding outdated content. The WAL file may hold multiple versions of the same records in the database. These are differentiated by a ‘salt’ value that increments with each operation. Information in the WAL file is flushed into the database file when a checkpointing [1] operation occurs.

The forensics implications of the write ahead journal, including timelining historical content based on the salt value, can be found at [2]. This blog post is a highly, highly informative post on the topic.

References
[1] https://www.sqlite.org/wal.html
[2] http://www.cclgroupltd.com/the-forensic-implications-of-sqlites-write-ahead-log/
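You can watch that WAL behavior happen with a few more lines of Python and the built-in sqlite3 module (again, the file names are just for the demo):

import os
import sqlite3

path = 'wal_demo.db'
for leftover in (path, path + '-wal', path + '-shm'):
    if os.path.exists(leftover):
        os.remove(leftover)

con = sqlite3.connect(path, isolation_level=None)  # autocommit keeps the demo simple
con.execute('PRAGMA journal_mode=WAL')
con.execute('CREATE TABLE t (v TEXT)')
con.execute("INSERT INTO t VALUES ('WAL_MARKER')")

# Committed data sits in the -wal sidecar file until a checkpoint copies it
# into the main database file.
print('marker in main db file:', b'WAL_MARKER' in open(path, 'rb').read())
print('marker in -wal file   :', b'WAL_MARKER' in open(path + '-wal', 'rb').read())

con.execute('PRAGMA wal_checkpoint(FULL)')
print('marker in main db after checkpoint:', b'WAL_MARKER' in open(path, 'rb').read())
con.close()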

3. What will cause deleted records to be overwritten?

There are three main causes for deleted records to be overwritten. The first was explained in the answer to question 1 (secure delete flag being set and records are overwritten with zeroes as soon as the DELETE FROM … is executed).

The second cause is for entries on the freelist (see answer to question 1) to be re-used during new writes. This “old data blocks are available until re-allocated” cause is exactly the same issue faced when attempting to recover deleted files from file systems. File system drivers also keep free lists of blocks, and until those blocks are re-allocated and overwritten the file data is recoverable.

The third issue is database vacuuming [1]. The forensics implications of this are nicely explained in [2]. Vacuuming rewrites the database into a new file and removes all of the freed records from the database. This causes the database file to shrink and the old records to be placed into the unallocated storage of the file system.

References
[1] https://sqlite.org/lang_vacuum.html
[2] http://linuxsleuthing.blogspot.com/2011/02/recovering-data-from-deleted-sql.html
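To see the vacuuming effect described above, here's one last quick sqlite3 sketch (demo file names, default settings assumed):

import os
import sqlite3

path = 'vacuum_demo.db'
if os.path.exists(path):
    os.remove(path)

con = sqlite3.connect(path, isolation_level=None)  # autocommit so VACUUM runs outside a transaction
con.execute('CREATE TABLE t (v TEXT)')
con.executemany('INSERT INTO t VALUES (?)',
                [('VACUUM_MARKER_{}'.format(i),) for i in range(1000)])
con.execute("DELETE FROM t WHERE v LIKE 'VACUUM_MARKER_%'")

# With secure_delete off (the default) the deleted rows still live on freelist pages.
print('before vacuum: size', os.path.getsize(path),
      'marker present:', b'VACUUM_MARKER' in open(path, 'rb').read())

con.execute('VACUUM')
# VACUUM rebuilds the file without the freed pages, so the file shrinks and the
# old row data ends up in the unallocated space of the file system instead.
print('after vacuum : size', os.path.getsize(path),
      'marker present:', b'VACUUM_MARKER' in open(path, 'rb').read())
con.close()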

Saturday, April 12, 2014

Daily Blog #294: Sunday Funday 4/13/14

Hello Reader,
              It's Sunday and time for another challenge of your DFIR skills and knowledge. If you watched the Forensic Lunch on Friday you heard David Dym talk about his new tool SQLiteDiver. With all the challenges we've done over the last 294 posts, we've never gone into depth on one of the most common forensic artifact locations on both mobile devices and standard systems. Good luck and give it your best shot; you never can tell when your answer could be the only submission!

The Prize:
A $200 Amazon Gift Card



The Rules:
  1. You must post your answer before Monday 4/14/14 8AM CST (GMT -5)
  2. The most complete answer wins
  3. You are allowed to edit your answer after posting
  4. If two answers are too similar for one to win, the one with the earlier posting time wins
  5. Be specific and be thoughtful 
  6. Anonymous entries are allowed, please email them to dcowen@g-cpartners.com. Please state in your email if you would like to be anonymous or not if you win.
  7. In order for an anonymous winner to receive a prize they must give their name to me, but I will not release it in a blog post

The Challenge:
 SQLite is becoming one of the most common application databases used across multiple operating systems and devices. As DFIR analysts we love SQLite for its ability to preserve deleted data. For this challenge let's see how well you understand why this rich deleted data set exists. Answer the following questions.
1. Why are deleted records accessible in SQLite databases?
2. What is the write ahead journal?
3. What will cause deleted records to be overwritten?

Friday, April 11, 2014

Daily Blog #293: Saturday Reading 4/12/14

Hello Reader,
                It's Saturday! One week behind you, another week ahead. In between those two events let's focus on what we can learn to make next week even better. Here are more links to make you think in this week's Saturday Reading.

1. If it's the first link of the week it must be the Forensic Lunch! This week we had:

Anthony Di Bello from Guidance Software talking about CEIC. CEIC is our industry's biggest conference and we will be there. If you are interested go here http://www.guidancesoftware.com/ceic/Pages/about-ceic.aspx and follow them on Twitter @encase

David Dym talking about his upcoming talk on SQLite forensics at CEIC and the early release of a new tool called SQLiteDiver which comes in GUI and CLI forms. You can download SQLiteDiver here: http://www.easymetadata.com/Downloads/SQLiteDiver/ and you can see Dave talk about it and SQLite forensics at CEIC!

You can watch it here: https://www.youtube.com/watch?v=ZEXnP34jf1I&list=UUZ7mQV3j4GNX-LU1IKPVQZg

2. There's a new blog in town, Jan Verhulst's 4ensics.net. He's written a good post on report writing (and a couple of posts before that) that I think you should take a look at here: http://www.4ensics.net/home/2014/4/2/r8nqt1isgo3lvaxtbcx7xy8iyqu6uq. Thanks to Jan, who let me know he started a blog so I can have more sources to review! If you are getting ready to put out research, let me know! I want to help you get your work the most exposure possible.

3. Richard Drinkwater has made a new post on his blog 'Forensics from the sausage factory'. I've always enjoyed Richard's blog and his great analysis, and this week's entry is no different. Richard is facing a common scenario that many of us face: receiving an image without access to the original machine it came from. He did the work to determine the plist that would allow him to know if automatic time syncing via NTP was enabled on OS X. If you get an OS X image in and want to know if the timestamps are accurate, this is worth a read, http://forensicsfromthesausagefactory.blogspot.com/2014/04/mac-os-x-set-date-and-time-automatically.html.

4. Jake Williams has a post up on the SANS blog with all of his Heartbleed slides, notes, and a link to his webcast on the subject. Heartbleed is going to be an ongoing problem for years to come, so it would be wise to get up to date on it now, http://digital-forensics.sans.org/blog/2014/04/10/heartbleed-links-simulcast-etc.

5. Chad Tilbury also has a new post up on the SANS blog, this one about how to use the new CrowdStrike tool CrowdResponse. In reading through the post it's clear that this is a powerful tool for large-scale Yara scanning of systems. Make sure to give this a read http://digital-forensics.sans.org/blog/2014/04/09/signature-detection-with-crowdresponse.

6. Andrew Case has a new post up on the Volatility labs blog this week showing how to build a decoder for a piece of shellcode http://volatility-labs.blogspot.com/2014/04/building-decoder-for-cve-2014-0502.html. If you are trying to become a better malware reverser you should reread this a couple dozen times.

7. Harlan Carvey's latest edition of Windows Forensic Analysis is out this time with a focus on Windows 8 forensics. I own most of Harlan's books and always appreciate the work he puts into making them such a good reference guide going forward, you can buy it here http://www.amazon.com/Windows-Forensic-Analysis-Toolkit-Edition/dp/0124171575/.

8. 'Chip_DFIR' is a blog that I just found thanks to the #dfir hashtag on Twitter this week. Chip has put up a two-part post, with the second part posted this week, covering how to recover and analyze deleted Chrome cache artifacts and metadata. You can read it here http://chipdfir.blogspot.co.uk/2014/04/chrome-cache-wheres-stash-part-2.html.

9. The Sketchymoose blog has a new post up on using a live USB boot drive to deal with encrypted drives on locked systems, http://sketchymoose.blogspot.co.uk/2014/04/creating-live-usbcd-for-whatever-reason.html. Always good to see posts showing what people have learned from work in the field.

Daily Blog #292: Forensic Lunch 4/11/14

Hello Reader,
         We had a few audio issues today that you'll hear in the recording. The good news is that it didn't affect our guest Anthony Di Bello from Guidance Software, and we cleared it up in the last half of the show. This week we had:

Anthony Di Bello from Guidance Software talking about CEIC. CEIC is our industry's biggest conference and we will be there. If you are interested go here http://www.guidancesoftware.com/ceic/Pages/about-ceic.aspx and follow them on Twitter @encase

David Dym talking about his upcoming talk on SQLite forensics at CEIC and the early release of a new tool called SQLiteDiver which comes in GUI and CLI forms. You can download SQLiteDiver here: http://www.easymetadata.com/Downloads/SQLiteDiver/ and you can see Dave talk about it and SQLite forensics at CEIC!


Thursday, April 10, 2014

Daily Blog #291: PFIC 2014

Hello Reader,
         If you've ever wanted to do hands-on Journal analysis with me, you'll have your first chance this year at PFIC, the Paraben Forensics Innovation Conference. I'll be doing a 90-minute lab on USN Journal analysis that will use both my tools and others to:
  • Explain USN fundamentals
  • Walk through analysis scenarios
  • Spot false positives
  • Test and validate findings
The USN Journal is quickly becoming one of my smoking guns, providing all sorts of great proof I otherwise would never have had. If you want to spend a week at a ski resort in Utah in November (tough life, I know, right?), then come join me and others at PFIC!

PFIC is going to be different this year as they are experimenting again with the format. Last year they had different tracks you could pick, but you couldn't see all the content in all the tracks. This year they have two tracks, basic and advanced. You will be in a group of 40 that will move through all the content of that track together across three days. I will get three different groups of 40 across three days for 90 minutes each day.

It's a neat concept and I am looking forward to seeing how it works out.