iOS Unified Logs - My parsing tool is out!
- Lionel Notari
- May 19
- 14 min read
About a year ago, I released my first iOS forensic tool, designed to extract unified logs from any iOS device in a forensically sound way.
Since then, I’ve continued to work (hard) on the next step: building a parser to investigate these logs more efficiently. I’ve been working on this for several months, not to say a year, alongside my research and the articles I publish on this blog to document iOS unified logs. It wasn’t always easy. There were some busy periods when I didn’t have much time to move forward. But I think I’ve now reached a version I’m happy with, and I hope it can be useful to others too.
The tool includes several features, which I’ll try to explain as clearly as possible in this article. I truly believe it can drastically reduce the time needed to investigate unified logs and make the process much easier. In short: it lets you go from a full logarchive containing millions of events to a filtered database and a forensic report (in text format) in just a few minutes.
That said, let’s be clear right away: the tool runs on macOS, since it uses commands that are only available on this operating system.
1. Why a parser?
As many of you (probably) know, the Mac Console is usually the tool I recommend for investigating iOS unified logs. While it’s not the most user-friendly or efficient interface, it does allow you to open a .logarchive and search through it quite easily. However, analyzing more than 20 million logs in the Console can take a lot of time.
There’s another limitation: it means staying on macOS. For now, parsing a .logarchive directly on Windows is still not really possible, and I know that’s not ideal for everyone. That’s why having a tool that can automatically export the logs from a .logarchive into a database seems really important. Once we have the database, we can work with it more easily, even on Windows.
That said, it’s important to take a few precautions to make sure the conversion doesn’t change anything in the logarchive. For this reason, the script runs several quality checks, and the results are included in the forensic report, allowing you to perform a deep verification of the conversion. It is, I think, really, really important!
2. What the tool looks like
The tool is written in Python and, once launched by the user, it looks like this:

We can see that the tool allows the following:
Enter basic case information and the name of the investigator who performs the analysis.
Select the .logarchive file to parse.
Choose a specific date and time range to limit the logs that will be exported into the database. While this option has some limitations (which we’ll discuss later), it can significantly speed up the export and parsing process. It’s definitely a useful option in many situations.
The tool is able to parse a large number of log types (even some that I haven’t had time to write articles about yet…). But the user can also import their own list of log types to extend the parsing functionality!
The filtered database can also be exported as a CSV file, if needed.
Finally, the user selects the output folder and chooses a name for the generated database.
3. How the script works
The first step of the script is to export all the events contained in the logarchive into a JSON file. The log show command allows us to do this easily. The mention of “all events” is very important here, because the tool does not only export logs, and this is crucial from a forensic point of view.
The tool extracts all the available events from the logarchive and writes them into a JSON file that can be several gigabytes in size! Make sure you have at least 40GB of free space before running the script. The JSON file and the resulting database will both be quite large.
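For reference, this export step can be reproduced manually with the log show command. Below is a minimal Python sketch of what such an invocation could look like; the file names are placeholders, and the exact flags the tool uses are not shown here (whether Info and Debug messages appear depends on what was persisted in the archive):

import subprocess

# Minimal sketch (not the tool's actual code): dump every event of a
# logarchive into a JSON file. File names below are placeholders.
archive = "system_logs.logarchive"
with open("all_events.json", "w") as out:
    subprocess.run(
        ["log", "show", "--archive", archive,
         "--style", "json",        # --style ndjson also works and streams line by line
         "--info", "--debug"],     # assumption: also include Info/Debug levels
        stdout=out,
        check=True,
    )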
What happens once the JSON file is created? Well, the script reads (parses) this JSON file and inserts the events into a database, it’s that simple. However, there are two key additions I should mention:
Surprisingly, a log exported in JSON format does not directly include the name of the process that generated it. The script extracts this information from another field and writes it into a dedicated process column in the database. A quality check is then performed: the process statistics from the logarchive (obtained with the log stats command) are compared with the ones from the database. I explain this more in the “Forensic Report” section.
Secondly, I found it necessary to assign a unique identifier to each log. This ensures that the logs can always be sorted back in their original order in a reliable way. This unique ID, also included in the filtered database, makes it easier to trace and analyze logs. (I’ll explain the filtered database in more detail later on.)
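To illustrate those two additions, here is a small hypothetical sketch. The exact source field the tool uses for the process name is not spelled out above; processImagePath is the obvious candidate, so that is what this example assumes:

import os

def add_tool_fields(events):
    # Hypothetical sketch: derive 'process' and assign a sequential log_number.
    for log_number, event in enumerate(events, start=1):
        # Assumption: the process name is the last component of processImagePath.
        event["process"] = os.path.basename(event.get("processImagePath", ""))
        # Unique, monotonically increasing ID so the original order can always be restored.
        event["log_number"] = log_number
    return events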
From the JSON file, the script extracts the following fields for every log entry. To be honest, I selected the fields that seemed the most relevant to me:
timestamp
machTimestamp
bootUUID
messageType
eventType
processImagePath
processID
threadID
subsystem
category
activityIdentifier
eventMessage
Adding the two fields created by the tool (log_number and process), the complete database contains 14 fields. For example:

Here, we can recognize the messageType field (Default, Info, Error, etc.) and the eventType field (logEvent, Activity, etc.) that I mentioned earlier in my article about statistics (see here: article).
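For readers who want to picture the resulting structure, here is a plausible SQLite schema built from the 14 fields listed above. The column types and their order are my own guesses, not the tool’s actual definition; only the table name is taken from the index statement shown just below:

import sqlite3

# Hypothetical schema for the complete database; column types and order are
# assumptions. The table name matches the index statement further below.
conn = sqlite3.connect("unified_logs_full.db")   # placeholder file name
conn.execute("""
CREATE TABLE IF NOT EXISTS "iOS Unified Logs - General" (
    log_number         INTEGER PRIMARY KEY,
    timestamp          TEXT,
    machTimestamp      INTEGER,
    bootUUID           TEXT,
    messageType        TEXT,
    eventType          TEXT,
    process            TEXT,
    processImagePath   TEXT,
    processID          INTEGER,
    threadID           INTEGER,
    subsystem          TEXT,
    category           TEXT,
    activityIdentifier INTEGER,
    eventMessage       TEXT
)
""")
conn.commit()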
Finally, it’s worth noting that the tool automatically creates an INDEX on the process and eventMessage fields to improve performance.
CREATE INDEX idx_lower_proc_event ON "iOS Unified Logs - General" (LOWER(process), LOWER(eventMessage))
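Because the index is built on LOWER(process) and LOWER(eventMessage), queries written with the same expressions can take advantage of it. A small illustrative example (the database file name is a placeholder):

import sqlite3

conn = sqlite3.connect("unified_logs_full.db")   # placeholder file name
rows = conn.execute(
    'SELECT log_number, timestamp, eventMessage '
    'FROM "iOS Unified Logs - General" '
    'WHERE LOWER(process) = LOWER(?) AND LOWER(eventMessage) LIKE LOWER(?)',
    ("SpringBoard", "%SBRingerControl%"),
).fetchall()
print(len(rows), "matching entries")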
4. THE filtered database
Getting this first complete database is already a great achievement: it can be transferred to a Windows machine, for example, to perform SQL searches. However, the database can easily contain millions of logs and weigh several gigabytes, which isn’t always easy to work with.
That’s why the tool automatically performs a second parsing step to extract specific logs and store them in a second database called “filtered.”
All the filters used to extract relevant logs are based on my research and the articles I’ve published. Some of these logs haven’t been mentioned directly in my articles to avoid overloading them, so don’t be surprised if you come across unknown entries. Some examples include:
Silent mode
Volume
Notifications
Camera
Date/Time change
Brightness
Battery
I’ll go over some of these categories in the rest of this article, and in future posts as well. In the filtered database, a new column appears on the far right: label. This column groups logs by category, based on the type of activity they reflect. It’s meant to help organize the logs and make the analysis more efficient. These labels are based on my own research and experience, and should be seen as informative guidance.
Here’s the current list of labels automatically assigned by the parser:
Brightness
Battery
Bluetooth
Boot/Shutdown
Touchscreen
WiFi
Orientation
Airplane Mode
App State
HomeScreen/App Switcher
Date/Time
Volume
Notification
Audio Output
Lock/Unlock
Control Center
Gesture
Flashlight (thanks to Christian Peter for the flashlight logs)
Siri
Motion
Camera
Scroll
Keyboard
Today view/Widget
Call
These different labels are meant to give you a quick idea of what the unified log entry refers to. They help you understand at a glance the kind of activity being recorded. Below is an example of a filtered database:

The log_number column on the far left matches the one used in the full database. So if you find an interesting log entry and want to investigate further, you can easily go back and explore the full database around that point.
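As an illustration of that workflow, here is a hedged sketch: find an entry of interest in the filtered database, then pull the surrounding rows from the complete database using log_number. The file and table names below are placeholders, since the tool’s actual naming isn’t described here:

import sqlite3

# Placeholder file/table names; adapt them to the names you chose in the tool.
filtered = sqlite3.connect("unified_logs_filtered.db")
full = sqlite3.connect("unified_logs_full.db")

hit = filtered.execute(
    "SELECT log_number FROM filtered_logs WHERE label = ? LIMIT 1",
    ("Lock/Unlock",),
).fetchone()

if hit:
    n = hit[0]
    # Grab +/- 20 rows of context around the entry of interest.
    context = full.execute(
        'SELECT log_number, timestamp, process, eventMessage '
        'FROM "iOS Unified Logs - General" '
        'WHERE log_number BETWEEN ? AND ? ORDER BY log_number',
        (n - 20, n + 20),
    ).fetchall()
    for row in context:
        print(row)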
Labels are also useful for quickly searching logs of interest. For example:
Label Boot/Shutdown:

Label Lock/Unlock:

5. New unified logs not documented yet!
As you can see from the categories above, I’ve added the parsing of some logs that haven’t been documented yet on my website. That’s expected: if I had waited to document every log first, this tool would never have been released. So here are a few quick examples for these new categories:
Battery
Process | Event | Comment |
---|---|---|
PowerUIAgent | Called for battery level=84, externalConnected=0 | not in charge |
powerd | Battery capacity change posted(0x8004b). Capacity:75 Source:Batt | not in charge |
powerd | Battery capacity change posted(0xb000d). Capacity:13 Source:AC | in charge |
PowerUIAgent | Called for battery level=100, externalConnected=1 | in charge |
SpringBoard | battery info changed to (charging 100) with detail='100%', low power mode='0' | in charge |
symptomsd | Power: battery-percentage 100.00 battery-power-connected 1 battery-charging 0 battery-warn 0 battery-critical 0 battery-absolute-capacity-mAh 3006 battery-voltage-mV 4292 battery-current-capacity-% 100 battery-maximum-capacity-% 100 battery-design-capacity-mAh 2942 battery-time-remaining 0 battery-fully-charged 1 battery-temperature 2119 screen-brightness 0 battery-raw-current-capacity 3057 battery-raw-maximum-capacity 3057 presentDOD 0 | in charge (when not in charge: battery-power-connected 0 in the log) |
SpringBoard | (<_BCPowerSourceController: 0x3fca87a40>) Found power source: { "Battery Provides Time Remaining" = 1; "Current Capacity" = 14; "Cycle count" = 227; "Date of manufacture" = "2031-12-26 00:00:00 +0000"; "Is Charging" = 0; "Is Present" = 1; "LPM Active" = 0; "Max Capacity" = 100; Name = "InternalBattery-0"; "Optimized Battery Charging Engaged" = 0; "Play Charging Chime" = 0; "Power Source ID" = 3080291; "Power Source State" = "Battery Power"; "Raw External Connected" = 0; "Show Charging UI" = 0; "Time to Empty" = 0; "Time to Full Charge" = 0; "Transport Type" = Internal; "Trusted Battery Data" = { TrustedBatteryEnabled = 0; }; Type = InternalBattery; } | in charge |
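If you need to turn these battery entries into structured values (for a timeline, for instance), a simple regular expression over eventMessage is usually enough. A minimal sketch based on the first PowerUIAgent entry above:

import re

# Sketch only: extract the battery level and charging flag from a
# PowerUIAgent message such as the first row of the table above.
pattern = re.compile(r"battery level=(\d+), externalConnected=(\d)")

msg = "Called for battery level=84, externalConnected=0"
m = pattern.search(msg)
if m:
    level = int(m.group(1))               # 84
    on_power = m.group(2) == "1"          # False -> not connected to a charger
    print(level, on_power)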
Brightness
Process | Event |
---|---|
backboardd | brightness change:0.477416 reason:BrightnessSystemDidChange options:<private> |
Volume
Process | Event |
---|---|
SpringBoard | <MPVolumeControllerSystemDataSource: 0x43a250a20> AVSystemController volume changed to: 0.173750 | category: Audio/Video | capabilities: <private> | reason: ExplicitVolumeChange | silence: NO |
SpringBoard | SBVolumeControl -- volume increment -- button state: down |
SpringBoard | SBVolumeControl -- volume increment -- button state: up |
SpringBoard | SBVolumeControl -- volume decrement -- button state: down |
SpringBoard | SBVolumeControl -- volume decrement -- button state: up |
SpringBoard | (2)Volume Change by delta '-0.062500' was Accepted |
SpringBoard | SBSOSClawGestureObserver - button press noted: volumeUp down: NO active(NO): [] |
SpringBoard | (1)Volume Change by delta '-0.062500' was Denied; reason: Error Domain=com.apple.springboard.volumeControl.state Code=1 "SpringBoard can't change the volume because the device is locked, no app is being hosted on the lock screen, and no audio is playing anywhere." UserInfo={state=<SBVolumeControlState: 0x43a1c3800; activeVolumeCategoryName: Audio/Video; isAudioPlayingSomewhere: NO; isCallOrFaceTimeActive: NO; currentRouteHasVolumeControl: YES; isFullyMuted: NO; isLocked: YES; isHostingAppOnLockScreen: NO; isShowingLockScreenMediaControls: NO>, NSLocalizedDescription=SpringBoard can't change the volume because the device is locked, no app is being hosted on the lock screen, and no audio is playing anywhere.} |
Silent mode
Process | Event |
---|---|
SpringBoard | SBRingerControl activateRingerHUD: silent |
SpringBoard | SBRingerHUDViewController setRingerSilent: true |
backboardd | ringer state changed to:silent |
SpringBoard | SBRingerControl activateRingerHUD: tone |
SpringBoard | SBRingerHUDViewController setRingerSilent: false |
backboardd | ringer state changed to:tone1 |
And some new Date/Time change unified logs!
If you suspect date and time manipulation, have a look at these logs:
Process | Event |
---|---|
dasd | Time change: Clock shifted by 345560.980357 secs |
mobiletimerd | <MTPowerAssertion: 0xa082c6020> Releasing power assert for: SignificantTimeChange |
mobiletimerd | <MTTimeListener: 0xa08099d60> timeZone: Europe/Zurich (UTC+1) offset 3600 |
For the first log, you might see a minus sign (-) in front of the number if the time has been set in the past:
Process | Event |
---|---|
dasd | Time change: Clock shifted by -1901306.240476 secs |
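The shift is expressed in seconds, so converting it makes the magnitude easier to read. For the two dasd examples above:

from datetime import timedelta

print(timedelta(seconds=345560.980357))        # 3 days, 23:59:20.980357 -> clock moved ~4 days forward
print(abs(timedelta(seconds=-1901306.240476))) # 22 days, 0:08:26.240476 -> clock moved ~22 days back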
6. Date range picker
If you want to speed up the export and conversion process of your logarchive, you can define a start and end date/time range:

By clicking on the blue buttons, a calendar will appear allowing you to set your desired date range:

However, please note that you must enter both a start and an end date; selecting only one is not allowed. If you try to do so, the following error message will appear:

Date range impact
Setting a date range is only recommended if your goal is to speed up the investigation and get a quick overview of the logarchive content. This is because the log stats command, which is used to generate statistics, does not support date filtering. As a result, using a date range will cause discrepancies in the forensic report file. If your investigation requires strict forensic standards, I strongly recommend performing a full extraction without a date range.
This will be explained in more detail in the section about the forensic report.
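Under the hood, a date range presumably ends up as --start/--end arguments passed to log show (these options exist for log show, while log stats has no equivalent, hence the discrepancies mentioned above). A hedged sketch with placeholder values:

import subprocess

# Illustrative only: ranged export with log show. Dates and file names are placeholders.
with open("ranged_events.json", "w") as out:
    subprocess.run(
        ["log", "show", "--archive", "system_logs.logarchive",
         "--style", "json",
         "--start", "2024-01-10 08:00:00",
         "--end", "2024-01-10 18:00:00"],
        stdout=out,
        check=True,
    )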
7. Use your own custom rules
This might be the feature I’m most proud of. I’m far from thinking I know every possible unified log entry. While I believe the “filtered” database already contains many useful entries, you may need additional logs for your specific investigation. Good news: you can import your own parsing rules, and these logs will be added to the filtered database! If you want to include your own rules, simply check the box “Enable custom rules (custom_rules.json)”:

How does it work?
To apply your custom parsing rules, use the custom_rules.json file included in the package you downloaded. Do not rename this file, and make sure it stays in the same folder as the script. When you check the box, the script will automatically search for and read the file. A pop-up will let you know whether your custom rules are valid or not.
How to build your rules
The template file included in the package contains a few example rules. These rules must be in JSON format and include the fields “process”, “like” (to filter the eventMessage field using keywords instead of the full message), and “label”. Please keep the following in mind:
You can filter logs using the “process” and “eventMessage” fields.
At least one “like” is required! A rule without any keyword is not valid. "like" arguments are run on the eventMessage field.
You can include as many “like” keywords as you want in your rule.
The “process” field is optional: you can build a rule based only on eventMessage (the "like" argument). However, leaving out the process may slow down the parsing.
You can define your own label! You can use an existing one from the script or create a new one. The label is also optional and can be left empty.
The fields process, like, and label must all be written as text strings.
A few examples of rules:
A complete and valid rule:
{
    "process": "routined",
    "like": ["%Location%"],
    "label": "My label"
},
A partial but valid rule:
{
    "like": ["%day%", "%connection%"],
    "label": "My second label"
},
Finally, a rule that only contains "like" keywords but is still valid:
{
    "like": ["%will%", "%update%"]
},
When all the rules are valid, the script will show you the following pop-up as soon as you check the option:

If you enter an invalid rule, for example:
{
"process": "routined",
"label": "My label"
},
This rule doesn’t contain any “like”, so you will get an error message like this:

In the same way, if you try to check this box but the file custom_rules.json is not in the folder, you will get the following pop-up:

Custom logs will be added to the Filtered database with your own label and will appear in the final statistics.
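To give an idea of what a rule ends up doing, here is a hypothetical translation of one rule into a SQL filter. This is not necessarily how the script implements it internally; in particular, whether multiple "like" keywords are combined with AND or OR is an assumption here, and the file name is a placeholder:

import sqlite3

rule = {"process": "routined", "like": ["%Location%"], "label": "My label"}

clauses = ["LOWER(eventMessage) LIKE LOWER(?)" for _ in rule["like"]]
params = list(rule["like"])
if rule.get("process"):
    clauses.append("LOWER(process) = LOWER(?)")
    params.append(rule["process"])

query = ('SELECT log_number, timestamp, process, eventMessage '
         'FROM "iOS Unified Logs - General" '
         'WHERE ' + " AND ".join(clauses))          # AND is an assumption

conn = sqlite3.connect("unified_logs_full.db")       # placeholder file name
matches = conn.execute(query, params).fetchall()
print(len(matches), "entries would be labelled", rule["label"])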
8. Export the filtered database in CSV
Since the CSV format can be very useful, the filtered database containing all the filtered logs can also be exported to a CSV file. If such an export is useful to you, simply check the box “Export filtered logs to CSV”.
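If you ever need to redo this kind of export yourself, outside the tool, a few lines of Python are enough. The file and table names below are placeholders:

import csv
import sqlite3

conn = sqlite3.connect("unified_logs_filtered.db")    # placeholder file name
cursor = conn.execute("SELECT * FROM filtered_logs ORDER BY log_number")

with open("filtered_logs.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])   # header row
    writer.writerows(cursor)
conn.close()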

9. Forensic report
The forensic report generated at the end of the script is obviously essential, for all the reasons we know as digital investigators. This report is automatically named “iOS Unified Logs - Parsing.txt” and is split into three sections. Each of these sections and their content are described below.
General information
The first section provides the following information:

In other words, this section reminds us who ran the parsing and for which case. It also shows which logarchive was analysed and the path where it was located. We can also see the name of the generated database and where it was saved.
The options selected by the user, such as enabling custom rules or exporting filtered logs to CSV, are also listed here.
At the end of the section, we get the start and end times of the parsing, as well as the total time (in seconds). The script then recalculates the MD5 and SHA1 of the original logarchive to confirm that it hasn’t been modified in any way during the process. As a reminder, when using my extraction tool, the logarchive is automatically hashed before the parsing begins.
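For reference, this is how such hashes are typically computed in Python. Note that a .logarchive is actually a bundle (a folder), so the tool presumably hashes its files in a defined order; the helper below only covers a single file and is a sketch, not the tool’s code:

import hashlib

def file_hashes(path, chunk_size=1024 * 1024):
    # Stream the file in chunks so large files don't have to fit in memory.
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()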
If a custom_rules.json file was loaded, its content and its MD5 hash are also included in the report. This can be useful to keep a record of which rules were applied at the time of parsing.

Database
In the second section of the report, you’ll find some information about the databases. First, the general statistics of the complete database are displayed:

We can see the archive size, the timestamp of the earliest and most recent events, and the total number of events (in other words, the number of rows in the complete database). This is especially useful for cross-checking with the results given by the log stats command on the original logarchive.
Finally, the report shows the number of different “Boots”, that is, the number of unique bootUUID values, which can be important to know early in the investigation.
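These headline figures can also be recomputed directly from the complete database with a single query, which is handy as an independent check. A sketch (the file name is a placeholder, and it assumes timestamps are stored in a sortable text format):

import sqlite3

conn = sqlite3.connect("unified_logs_full.db")   # placeholder file name
total, first, last, boots = conn.execute(
    'SELECT COUNT(*), MIN(timestamp), MAX(timestamp), COUNT(DISTINCT bootUUID) '
    'FROM "iOS Unified Logs - General"'
).fetchone()
print("events:", total, "| first:", first, "| last:", last, "| boots:", boots)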
The report also includes detailed statistics by messageType, eventType, and by process, to help you perform a deeper quality check.

All of these numbers can be compared with the ones obtained directly from the logarchive (see next section!).
Finally, the last table in the report related to the database gives the counts of each label in the filtered database:

Logarchive
The logarchive statistics should now look familiar to you. If you use my extraction tool, these are already included in the report that comes with the logarchive. I also recently published a full article explaining this command in more detail.
In this case, the parsing report also includes the statistics from the parsed logarchive. This allows us to compare the total number of events, but also the breakdowns by messageType, eventType, and process. This is a great way to confirm that the conversion worked correctly.
Here are the statistics:

Let’s try to compare a few of these values.
First, we can see that the start and end dates match, that’s a good sign! Then, we notice that the number of statedump events (called stateEvent in the database) is the same in both the database and the logarchive: 20,937. That’s a strong start!
Next, we can check the “events” section. As mentioned earlier, the total number of events, 22,176,487, is the same in both outputs. The number of logEvent entries, 20,690,646, also matches. And so do the values for signpostEvent and lossEvent.
Overall, this is great. Here’s a summary table:
eventType | Database count | Logarchive count | Difference |
---|---|---|---|
logEvent | 20,690,646 | 20,690,646 | 0 |
activityCreateEvent | 1,073,045 | 1,073,045 | 0 |
signpostEvent | 385,957 | 385,957 | 0 |
stateEvent | 20,937 | 20,937 | 0 |
lossEvent | 5,395 | 5,395 | 0 |
timesyncEvent | 457 | 457 | 0 |
userActionEvent | 50 | 50 | 0 |
Total | 22,176,487 | 22,176,487 | 0 |
So, nothing special to report, everything shows that the conversion worked well! Let’s continue the same way with the messageType values:
log messageType | Database count | Logarchive count | Difference | Comment |
---|---|---|---|---|
Default | 18,645,254 | 19,031,211 | -385,957 | Number of signpostEvent, OK! |
(empty) | 1,485,841 | 0 | 1,485,841 | Explainable, OK! |
Info | 1,194,078 | 1,194,078 | 0 | OK! |
Error | 801,986 | 801,986 | 0 | OK! |
Debug | 27,365 | 27,365 | 0 | OK! |
Fault | 21,963 | 21,963 | 0 | OK |
As for the difference in the “Default” value, I refer you to my previous article that explains the known issue with the log stats command and this specific stat. So this difference is fully explainable and not a problem. However, we need to take a closer look at the number of “empty” messages: 1,485,841. This number is very close to the result of the following calculation:
activityCreateEvent + signpostEvent + userActionEvent + stateEvent + timesyncEvent, so in numbers:
1,073,045 + 385,957 + 50 + 20,937 + 457 = 1,480,446
And if we subtract 1,480,446 from 1,485,841, we get 5,395, which is exactly the number of lossEvent events! This confirms that all events with one of the following eventType values do not have a messageType, which actually makes sense:

In the end, this makes perfect sense, the eventType values associated with an empty messageType aren’t traditional log messages. Instead, they correspond to other system event types such as activityCreateEvent, timesyncEvent, signpostEvent, or lossEvent. What I find slightly confusing, though, is that this isn’t clearly mentioned when running the log stats command. For instance, here’s what you get when you generate statistics specifically for the activityCreateEvent type, as expected, all fields related to messageType are completely empty:

After reviewing the eventType and messageType statistics, we can now compare the process statistics. It’s important to note that the list of processes in the database report is hard-coded in the script. This means the tool always reports statistics for the same predefined set of processes, regardless of whether others might have generated more events. In contrast, the log stats command shows statistics for the top processes based on activity in the logarchive. So don’t be surprised if you see different process names between the two reports, that’s completely expected.
Let’s now compare the results for processes that appear in both the SQLite database and the logarchive statistics. Here’s a summary table of matching values:
Process | Database count | Logarchive count | Difference | Comment |
---|---|---|---|---|
kernel | 3,751,137 | 3,756,454 | -5,317 | To explain! |
dasd | 2,788,262 | 2,788,262 | 0 | OK! |
locationd | 2,707,627 | 2,707,627 | 0 | OK! |
mDNSResponder | 1,241,148 | 1,241,148 | 0 | OK! |
wifid | 1,007,808 | 1,007,808 | 0 | OK! |
powerd | 891,118 | 891,118 | 0 | OK! |
identityservicesd | 852,941 | 852,941 | 0 | OK! |
WirelessRadioManagerd | 683,481 | 683,481 | 0 | OK! |
symptomsd | 483,935 | 483,935 | 0 | OK! |
SpringBoard | 438,046 | 438,048 | -2 | To explain! |
contextstored | 374,434 | 374,434 | 0 | OK! |
CommCenter | 352,112 | 352,112 | 0 | OK! |
UserEventAgent | 316,497 | 316,497 | 0 | OK! |
The first good news is that most of the statistics based on processes, both from the full database and the logarchive, match! Once again, this shows that the script is working correctly and does not alter the data, which is absolutely essential from a forensic point of view. But let’s now look at something more serious. The database contains 5,317 fewer events for the kernel process and 2 fewer events for SpringBoard compared to the logarchive. Honestly, my first thought was that this had to be related to… the lossEvent entries! I couldn’t really think of any other explanation.
So, the first thing I did was print the exact statistics for these two processes in my terminal using the following commands:
log stats --archive "Final.logarchive" --process "kernel"
log stats --archive "Final.logarchive" --process "SpringBoard"
Kernel results
The statistics obtained for the kernel process are as follows:

No comment… This time we get the correct number, meaning the same number as in my database. And again, we see a difference of exactly 5,317 compared to the output of the log stats command when applied to the entire logarchive. It seems that the log stats command does not return consistent statistics depending on the parameters used…
On the one hand, this reassures me about my tool, once again, it shows that it works correctly. But honestly, I’m starting to lose my trust in the log stats command…
SpringBoard results
No surprise here, we observe the same pattern as with the kernel process, and we get the same number as the one calculated in the database.

So how can we explain these differences? I still strongly believe that they come from the lossEvent logs. When I generate the specific statistics for those events, here is what I get:

Very interesting! We can clearly see that one process generated 5,317 lossEvent logs, and another one generated 2. I won’t go any deeper in the analysis for now, but I feel confident saying that the differences are now explained, and that they are caused by the lossEvent entries!
Date range discrepancies
I briefly mentioned earlier that applying a date range can lead to issues, and what you’ll observe are clear inconsistencies in the statistics I shared above. Indeed, the log stats command cannot be used with a date range, which means it cannot match the behavior of log show, which can accept a date range as a parameter. To keep this limitation in mind, the following warning will be included at the top of the forensic report:

If you apply a date range, none of the comparison statistics between the logarchive and the full database will be accurate. Using a date range is a great option to speed up the parsing process at first, but if the logs turn out to be a key piece in your investigation, you should definitely run a full parsing without any date restriction.
When a date range is applied, this information is clearly recorded in the forensic report and also displayed in the Terminal output.


Terminal
Forensic report
10. Performance
I believe the performance of the tool is quite reasonable, at least on my Mac Mini with 8GB of RAM.
Here’s a breakdown of the main steps and the time they usually take (based on parsing a logarchive with 22 million events):
Exporting logs to JSON: This step depends entirely on Apple’s log show command and takes around 6 minutes. Unfortunately, we don’t have any control over this part.
Converting JSON to SQLite: Parsing the JSON file and inserting the events into the full database takes around 3 to 5 minutes.
Filtering the logs: Extracting the relevant logs and building the filtered database usually takes another 5 to 6 minutes.
Post-processing: This includes computing statistics, generating the forensic report, computing hashes, etc. This last phase adds another 2 to 3 minutes.
In total, the entire procedure, from exporting the logs to closing the tool, takes between 880 and 990 seconds, or roughly 15 to 17 minutes.

11. Download
The script can be downloaded here:
The ZIP package contains two folders:
“Acquisition” for the extraction tool
“Parsing” for the parser, along with the custom_rules.json template file
You’ll also find a README and a requirements file listing the few dependencies needed for the tool to run properly. I have also tested the tool on an old 2012 Mac running macOS 10.15 and it worked, so you shouldn’t have any issues running it.
I hope this tool will save you time and help you focus on what really matters in your investigation.
📚 To access all my SQL queries, follow this link!
Happy parsing!
Lionel Notari – ios-unifiedlogs.com