diff --git a/README.md b/README.md
index 0bffee0a..c4cfb2f4 100644
--- a/README.md
+++ b/README.md
@@ -13,6 +13,7 @@ Special thanks to [@0vercl0k](https://twitter.com/0vercl0k) for the inspiration.
## Releases
+* v0.7 -- Frida, C++ demangling, context menu, function prefixing, tweaks, bugfixes.
* v0.6 -- Intel pintool, cyclomatic complexity, batch load, bugfixes.
* v0.5 -- Search, IDA 7 support, many improvements, stability.
* v0.4 -- Most compute is now asynchronous, bugfixes.
@@ -29,7 +30,7 @@ Install Lighthouse into the IDA plugins folder.
- On MacOS, the folder is at `/Applications/IDA\ Pro\ 6.8/idaq.app/Contents/MacOS/plugins`
- On Linux, the folder may be at `/opt/IDA/plugins/`
-The plugin is platform agnostic, but has only been tested on Windows for IDA 6.8 --> 7.0
+The plugin is compatible with IDA Pro 6.8 --> 7.0 on Windows, MacOS, and Linux.
## Usage
@@ -67,6 +68,16 @@ The Coverage Overview is a dockable widget that provides a function level view o
This table can be sorted by column, and entries can be double clicked to jump to their corresponding disassembly.
+## Context Menu
+
+Right clicking the table in the Coverage Overview will produce a context menu with a few basic amenities.
+
+
+
+
+
+These actions can be used to quickly manipulate or interact with entries in the table.
+
## Coverage Composition
Building relationships between multiple sets of coverage data often distills deeper meaning than their individual parts. The shell at the bottom of the [Coverage Overview](#coverage-overview) provides an interactive means of constructing these relationships.
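As a brief sketch of what such a composition might look like (operator set assumed from the shell's logical grammar; loaded coverage sets are referenced by shorthand symbols such as `A`, `B`, `C`):

```
(A & B) - C
```

This would compose the coverage common to sets `A` and `B`, minus any coverage present in set `C`.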
@@ -134,7 +145,7 @@ Loaded coverage data and user constructed compositions can be selected or delete
Before using Lighthouse, one will need to collect code coverage data for their target binary / application.
-The examples below demonstrate how one can use [DynamoRIO](http://www.dynamorio.org) or [Intel Pin](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) to collect Lighthouse compatible coverage agaainst a target. The `.log` files produced by these instrumentation tools can be loaded directly into Lighthouse.
+The examples below demonstrate how one can use [DynamoRIO](http://www.dynamorio.org), [Intel Pin](https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) or [Frida](https://www.frida.re) to collect Lighthouse compatible coverage against a target. The `.log` files produced by these instrumentation tools can be loaded directly into Lighthouse.
## DynamoRIO
@@ -156,7 +167,17 @@ Example usage:
pin.exe -t CodeCoverage64.dll -- boombox.exe
```
-For convenience, binaries for the Windows pintool can be found on the [releases](https://github.com/gaasedelen/lighthouse/releases/tag/v0.6.0) page. MacOS and Linux users need to compile the pintool themselves following the [instructions](coverage/pin#compilation) included with the pintool for their respective platforms.
+For convenience, binaries for the Windows pintool can be found on the [releases](https://github.com/gaasedelen/lighthouse/releases/tag/v0.7.0) page. MacOS and Linux users need to compile the pintool themselves following the [instructions](coverage/pin#compilation) included with the pintool for their respective platforms.
+
+## Frida (Experimental)
+
+Lighthouse offers limited support for Frida based code coverage via a custom [instrumentation script](coverage/frida) contributed by [yrp](https://twitter.com/yrp604).
+
+Example usage:
+
+```
+sudo python frida-drcov.py bb-bench
+```
# Future Work
@@ -166,7 +187,7 @@ Time and motivation permitting, future work may include:
* ~~Multifile/coverage support~~
* Profiling based heatmaps/painting
* Coverage & Profiling Treemaps
-* Additional coverage sources, trace formats, etc
+* ~~Additional coverage sources, trace formats, etc~~
* Improved Pseudocode painting
I welcome external contributions, issues, and feature requests.
diff --git a/coverage/frida/README.md b/coverage/frida/README.md
new file mode 100644
index 00000000..1374cd4d
--- /dev/null
+++ b/coverage/frida/README.md
@@ -0,0 +1,72 @@
+# frida-drcov.py
+
+In this folder you will find the code coverage collection script `frida-drcov.py`, which runs on top of the [Frida](https://www.frida.re/) DBI toolkit. This script produces code coverage (using Frida) in a log format compatible with [Lighthouse](https://github.com/gaasedelen/lighthouse).
+
+Frida is best supported on mobile platforms such as iOS and Android, with some claimed support for Windows, MacOS, Linux, and QNX. Practically speaking, `frida-drcov.py` should only be used for collecting coverage data on mobile applications.
+
+This script is labeled only as a prototype.
+
+## Install
+
+To use `frida-drcov.py`, you must have [Frida](https://www.frida.re/) installed. This can be done via Python's `pip`:
+
+```
+sudo pip install frida
+```
+
+## Usage
+
+Once Frida is installed, the `frida-drcov.py` script in this repo can be used to collect coverage against a running process, as demonstrated below. By default, the code coverage data will be written to the file `frida-cov.log` at the end of execution.
+
+```
+python frida-drcov.py <process name | pid>
+```
+
+Here is an example of instrumenting the running process `bb-bench`:
+
+```
+$ sudo python frida-drcov.py bb-bench
+[+] Got module info
+Starting to stalk threads...
+Stalking thread 775
+Done stalking threads.
+[*] Now collecting info, control-D to terminate....
+[*] Detaching, this might take a second... # ^d
+[+] Detached. Got 320 basic blocks.
+[*] Formatting coverage and saving...
+[!] Done
+$ ls -lh frida-cov.log # this is the file you will load into lighthouse
+-rw-r--r-- 1 root staff 7.2K 21 Oct 11:58 frida-cov.log
+```
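Under the hood, the log body written by `frida-drcov.py` is a sequence of packed 8-byte basic block records (see the `bb_entry_t` layout documented in the script): a `uint32` start offset from the module base, followed by two `uint16`s for block size and module id. A minimal sketch of decoding one record with Python's `struct` module (little-endian layout assumed, matching the typed-array packing in the script; the function name here is hypothetical):

```python
import struct

def parse_bb_entry(raw):
    """Decode one 8-byte drcov basic block record.

    Layout (little-endian): uint32 start offset from the module
    base, uint16 block size, uint16 module id.
    """
    start, size, mod_id = struct.unpack('<IHH', raw)
    return {'start': start, 'size': size, 'mod_id': mod_id}

# Pack a sample record: offset 0x10 into module 2, block size 5
raw = struct.pack('<IHH', 0x10, 5, 2)
print(parse_bb_entry(raw))  # {'start': 16, 'size': 5, 'mod_id': 2}
```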
+
+Using the `-o` flag, one can specify a custom name/location for the coverage log file:
+
+```
+python frida-drcov.py -o more-coverage.log foo
+```
+
+## Module Whitelisting
+
+One can whitelist specific modules inside the target process. Say you have binary `foo` which imports the libraries `libfoo`, `libbar`, and `libbaz`. Using the `-w` flag (whitelist) on the command line, we can explicitly target modules of interest:
+
+```
+$ python frida-drcov.py -w libfoo -w libbaz foo
+```
+
+This will reduce the amount of information collected and improve performance. If no `-w` arguments are supplied, `frida-drcov.py` will trace all loaded images.
+
+## Thread Targeting
+
+On multi-threaded applications, tracing all threads can impose significant overhead. For these cases, you can restrict coverage collection to specific thread ids if you only care about certain threads.
+
+In the following example, we target thread ids `543` and `678` running in the process named `foo`.
+
+```
+python frida-drcov.py -t 543 -t 678 foo
+```
+
+Without the `-t` flag, all threads that exist in the process at the time of attach will be traced.
+
+# Authors
+
+* yrp ([@yrp604](https://twitter.com/yrp604))
diff --git a/coverage/frida/frida-drcov.py b/coverage/frida/frida-drcov.py
new file mode 100755
index 00000000..8f0e37f6
--- /dev/null
+++ b/coverage/frida/frida-drcov.py
@@ -0,0 +1,322 @@
+#!/usr/bin/env python
+from __future__ import print_function
+
+import argparse
+import json
+import sys
+
+import frida
+
+"""
+Frida BB tracer that outputs in DRcov format.
+
+Frida script is responsible for:
+- Getting and sending the process module map initially
+- Getting the code execution events
+- Parsing the raw event into a GumCompileEvent
+- Converting from GumCompileEvent to DRcov block
+- Sending a list of DRcov blocks to python
+
+Python side is responsible for:
+- Attaching and detaching from the target process
+- Removing duplicate DRcov blocks
+- Formatting module map and blocks
+- Writing the output file
+"""
+
+# Our frida script, takes two string arguments to embed
+# 1. whitelist of modules, in the form "['module_a', 'module_b']" or "['all']"
+# 2. threads to trace, in the form "[345, 765]" or "['all']"
+js = """
+"use strict";
+
+var whitelist = %s;
+var threadlist = %s;
+
+// Get the module map
+function make_maps() {
+ var maps = Process.enumerateModulesSync();
+    // Add the module id and end address to each entry
+    maps.forEach(function(o, i) {
+        o.id = i;
+        o.end = o.base.add(o.size);
+    });
+
+ return maps;
+}
+
+var maps = make_maps();
+
+send({'map': maps});
+
+// We want to use frida's ModuleMap to create DRcov events, however frida's
+// Module object doesn't have the 'id' we added above. To get around this,
+// we'll create a mapping from path -> id, and have the ModuleMap look up the
+// path. While the ModuleMap does contain the base address, if we cache it
+// here, we can simply look up the path rather than the entire Module object.
+var module_ids = {};
+
+maps.forEach(function (e) {
+    module_ids[e.path] = {id: e.id, start: e.base};
+});
+
+var filtered_maps = new ModuleMap(function (m) {
+ if (whitelist.indexOf('all') >= 0) { return true; }
+
+ return whitelist.indexOf(m.name) >= 0;
+});
+
+// This function takes a list of GumCompileEvents and converts it into a DRcov
+// entry. Note that we'll get duplicated events when two traced threads
+// execute the same code, but this will be handled by the python side.
+function drcov_bbs(bbs, fmaps, path_ids) {
+ // We're going to use send(..., data) so we need an array buffer to send
+ // our results back with. Let's go ahead and alloc the max possible
+ // reply size
+
+ /*
+ // Data structure for the coverage info itself
+ typedef struct _bb_entry_t {
+ uint start; // offset of bb start from the image base
+ ushort size;
+ ushort mod_id;
+ } bb_entry_t;
+ */
+
+ var entry_sz = 8;
+
+ var bb = new ArrayBuffer(entry_sz * bbs.length);
+
+ var num_entries = 0;
+
+ for (var i = 0; i < bbs.length; ++i) {
+ var e = bbs[i];
+
+ var start = e[0];
+ var end = e[1];
+
+ var path = fmaps.findPath(start);
+
+ if (path == null) { continue; }
+
+ var mod_info = path_ids[path];
+
+ var offset = start.sub(mod_info.start).toInt32();
+ var size = end.sub(start).toInt32();
+ var mod_id = mod_info.id;
+
+ // We're going to create two memory views into the array we alloc'd at
+ // the start.
+
+ // we want one u32 after all the other entries we've created
+ var x = new Uint32Array(bb, num_entries * entry_sz, 1);
+ x[0] = offset;
+
+ // we want two u16's offset after the 4 byte u32 above
+ var y = new Uint16Array(bb, num_entries * entry_sz + 4, 2);
+ y[0] = size;
+ y[1] = mod_id;
+
+ ++num_entries;
+ }
+
+ // We can save some space here, rather than sending the entire array back,
+ // we can create a new view into the already allocated memory, and just
+ // send back that linear chunk.
+ return new Uint8Array(bb, 0, num_entries * entry_sz);
+}
+// Punt on self modifying code -- should improve speed and lighthouse will
+// barf on it anyways
+Stalker.trustThreshold = 0;
+
+console.log('Starting to stalk threads...');
+
+// Note, we will miss any bbs hit by threads that are created after we've
+// attached
+Process.enumerateThreads({
+ onMatch: function (thread) {
+ if (threadlist.indexOf(thread.id) < 0 &&
+ threadlist.indexOf('all') < 0) {
+            // This is not the thread you're looking for
+ return;
+ }
+
+ console.log('Stalking thread ' + thread.id + '.');
+
+ Stalker.follow(thread.id, {
+ events: {
+ compile: true
+ },
+ onReceive: function (event) {
+ var bb_events = Stalker.parse(event,
+ {stringify: false, annotate: false});
+ var bbs = drcov_bbs(bb_events, filtered_maps, module_ids);
+
+ // We're going to send a dummy message, the actual bb is in the
+ // data field. We're sending a dict to keep it consistent with
+ // the map. We're also creating the drcov event in javascript,
+ // so on the py recv side we can just blindly add it to a set.
+ send({bbs: 1}, bbs);
+ }
+ });
+ },
+ onComplete: function () { console.log('Done stalking threads.'); }
+});
+"""
+
+# These are global so we can easily access them from the frida callbacks
+# It's important that bbs is a set, as we're going to depend on its uniquing
+# behavior for deduplication
+modules = []
+bbs = set([])
+
+# This converts the object frida sends which has string addresses into
+# a python dict
+def populate_modules(image_list):
+ global modules
+
+ for image in image_list:
+ idx = image['id']
+ path = image['path']
+ base = int(image['base'], 0)
+ end = int(image['end'], 0)
+ size = image['size']
+
+ m = {
+ 'id': idx,
+ 'path': path,
+ 'base': base,
+ 'end': end,
+ 'size': size}
+
+ modules.append(m)
+
+ print('[+] Got module info.')
+
+# called when we get coverage data from frida
+def populate_bbs(data):
+ global bbs
+
+    # we know every drcov block is 8 bytes, so let's just blindly slice and
+ # insert. This will dedup for us.
+ block_sz = 8
+ for i in range(0, len(data), block_sz):
+ bbs.add(data[i:i+block_sz])
+
+# take the module dict and format it as a drcov logfile header
+def create_header(mods):
+ header = ''
+ header += 'DRCOV VERSION: 2\n'
+ header += 'DRCOV FLAVOR: frida\n'
+ header += 'Module Table: version 2, count %d\n' % len(mods)
+ header += 'Columns: id, base, end, entry, checksum, timestamp, path\n'
+
+ entries = []
+
+ for m in mods:
+ # drcov: id, base, end, entry, checksum, timestamp, path
+    # frida doesn't give us entry, checksum, or timestamp
+ # luckily, I don't think we need them.
+ entry = '%3d, %#016x, %#016x, %#016x, %#08x, %#08x, %s' % (
+ m['id'], m['base'], m['end'], 0, 0, 0, m['path'])
+
+ entries.append(entry)
+
+ header_modules = '\n'.join(entries)
+
+ return header + header_modules + '\n'
+
+# take the recv'd basic blocks, finish the header, and append the coverage
+def create_coverage(data):
+ bb_header = 'BB Table: %d bbs\n' % len(data)
+ return bb_header + ''.join(data)
+
+def on_message(msg, data):
+ #print(msg)
+ pay = msg['payload']
+ if 'map' in pay:
+ maps = pay['map']
+ populate_modules(maps)
+ else:
+ populate_bbs(data)
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument('target',
+ help='target process name or pid',
+ default='-1')
+ parser.add_argument('-o', '--outfile',
+ help='coverage file',
+ default='frida-cov.log')
+ parser.add_argument('-w', '--whitelist-modules',
+ help='module to trace, may be specified multiple times [all]',
+ action='append', default=[])
+ parser.add_argument('-t', '--thread-id',
+ help='threads to trace, may be specified multiple times [all]',
+ action='append', type=int, default=[])
+ parser.add_argument('-D', '--device',
+ help='select a device by id [local]',
+ default='local')
+
+ args = parser.parse_args()
+
+ device = frida.get_device(args.device)
+
+ target = -1
+ for p in device.enumerate_processes():
+ if args.target in [str(p.pid), p.name]:
+ if target == -1:
+ target = p.pid
+ else:
+ print('[-] Warning: multiple processes on device match '
+ '\'%s\', using pid: %d' % (args.target, target))
+
+ if target == -1:
+ print('[-] Error: could not find process matching '
+ '\'%s\' on device \'%s\'' % (args.target, device.id))
+ sys.exit(1)
+
+ whitelist_modules = ['all']
+ if len(args.whitelist_modules):
+ whitelist_modules = args.whitelist_modules
+
+ threadlist = ['all']
+ if len(args.thread_id):
+ threadlist = args.thread_id
+
+ json_whitelist_modules = json.dumps(whitelist_modules)
+ json_threadlist = json.dumps(threadlist)
+
+ print('[*] Attaching to pid \'%d\' on device \'%s\'...' %
+ (target, device.id))
+
+ session = device.attach(target)
+ print('[+] Attached. Loading script...')
+
+ script = session.create_script(js % (json_whitelist_modules, json_threadlist))
+ script.on('message', on_message)
+ script.load()
+
+ print('[*] Now collecting info, control-D to terminate....')
+
+ sys.stdin.read()
+
+ print('[*] Detaching, this might take a second...')
+ session.detach()
+
+ print('[+] Detached. Got %d basic blocks.' % len(bbs))
+ print('[*] Formatting coverage and saving...')
+
+ header = create_header(modules)
+ body = create_coverage(bbs)
+
+ with open(args.outfile, 'wb') as h:
+ h.write(header)
+ h.write(body)
+
+ print('[!] Done')
+
+ sys.exit(0)
+
+if __name__ == '__main__':
+ main()
diff --git a/coverage/pin/CodeCoverage.cpp b/coverage/pin/CodeCoverage.cpp
index 740ab710..71e94ee3 100644
--- a/coverage/pin/CodeCoverage.cpp
+++ b/coverage/pin/CodeCoverage.cpp
@@ -134,7 +134,7 @@ static VOID OnImageLoad(IMG img, VOID* v)
ADDRINT low = IMG_LowAddress(img);
ADDRINT high = IMG_HighAddress(img);
- printf("Loaded image: 0x%.16lx:0x%.16lx -> %s\n", low, high, img_name.c_str());
+ printf("Loaded image: %p:%p -> %s\n", (void *)low, (void *)high, img_name.c_str());
// Save the loaded image with its original full name/path.
PIN_GetLock(&context.m_loaded_images_lock, 1);
@@ -161,7 +161,7 @@ static VOID OnImageUnload(IMG img, VOID* v)
}
// Basic block hit event handler.
-static VOID OnBasicBlockHit(THREADID tid, ADDRINT addr, UINT32 size, VOID* v)
+static VOID PIN_FAST_ANALYSIS_CALL OnBasicBlockHit(THREADID tid, ADDRINT addr, UINT32 size, VOID* v)
{
auto& context = *reinterpret_cast(v);
ThreadData* data = context.GetThreadLocalData(tid);
@@ -184,6 +184,7 @@ static VOID OnTrace(TRACE trace, VOID* v)
for (; BBL_Valid(bbl); bbl = BBL_Next(bbl)) {
addr = BBL_Address(bbl);
BBL_InsertCall(bbl, IPOINT_ANYWHERE, (AFUNPTR)OnBasicBlockHit,
+ IARG_FAST_ANALYSIS_CALL,
IARG_THREAD_ID,
IARG_ADDRINT, addr,
IARG_UINT32, BBL_Size(bbl),
@@ -204,8 +205,8 @@ static VOID OnFini(INT32 code, VOID* v)
// We don't supply entry, checksum and, timestamp.
for (unsigned i = 0; i < context.m_loaded_images.size(); i++) {
const auto& image = context.m_loaded_images[i];
- context.m_trace->write_string("%2u, 0x%.16llx, 0x%.16llx, 0x0000000000000000, 0x00000000, 0x00000000, %s\n",
- i, image.low_, image.high_, image.name_.c_str());
+ context.m_trace->write_string("%2u, %p, %p, 0x0000000000000000, 0x00000000, 0x00000000, %s\n",
+ i, (void *)image.low_, (void *)image.high_, image.name_.c_str());
}
// Add non terminated threads to the list of terminated threads.
@@ -239,8 +240,8 @@ static VOID OnFini(INT32 code, VOID* v)
if (it == context.m_loaded_images.end())
continue;
- tmp.id = std::distance(context.m_loaded_images.begin(), it);
- tmp.start = address - it->low_;
+ tmp.id = (uint16_t)std::distance(context.m_loaded_images.begin(), it);
+ tmp.start = (uint32_t)(address - it->low_);
tmp.size = data->m_block_size[address];
context.m_trace->write_binary(&tmp, sizeof(tmp));
diff --git a/coverage/pin/README.md b/coverage/pin/README.md
index ee116b2c..489818d2 100644
--- a/coverage/pin/README.md
+++ b/coverage/pin/README.md
@@ -15,8 +15,11 @@ Follow the build instructions below for your respective platform.
On MacOS or Linux, one can compile the pintool using the following commands.
```
-cd ~/lighthouse/coverage/pin # Location of this repo / pintool source
-export PIN_ROOT=~/pin # Location where you extracted Pin
+# Location of this repo / pintool source
+cd ~/lighthouse/coverage/pin
+
+# Location where you extracted Pin
+export PIN_ROOT=~/pin
export PATH=$PATH:$PIN_ROOT
make
```
@@ -36,8 +39,12 @@ Launch a command prompt and build the pintool with the following commands.
```
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86
-cd C:\Users\user\lighthouse\coverage\pin # Location of this repo / pintool source
-set PIN_ROOT=C:\pin # Location where you extracted Pin
+
+REM Location of this repo / pintool source
+cd C:\Users\user\lighthouse\coverage\pin
+
+REM Location where you extracted Pin
+set PIN_ROOT=C:\pin
set PATH=%PATH%;%PIN_ROOT%
build-x86.bat
```
@@ -46,8 +53,12 @@ build-x86.bat
```
"C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat" x86_amd64
-cd C:\Users\user\lighthouse\coverage\pin # Location of this repo / pintool source
-set PIN_ROOT=C:\pin # Location where you extracted Pin
+
+REM Location of this repo / pintool source
+cd C:\Users\user\lighthouse\coverage\pin
+
+REM Location where you extracted Pin
+set PIN_ROOT=C:\pin
set PATH=%PATH%;%PIN_ROOT%
build-x64.bat
```
@@ -57,7 +68,7 @@ The resulting binaries will be labeled based on their architecture (e.g., 64 is the
* CodeCoverage.dll
* CodeCoverage64.dll
-Compiling a pintool on Windows can be more arduous. Because of this, we have provided compiled binaries for Windows on the [releases](https://github.com/gaasedelen/lighthouse/releases/tag/v0.6.0) page.
+Compiling a pintool on Windows can be more arduous. Because of this, we have provided compiled binaries for Windows on the [releases](https://github.com/gaasedelen/lighthouse/releases/tag/v0.7.0) page.
# Usage
diff --git a/dev_scripts/flip_python.bat b/dev_scripts/flip_python.bat
deleted file mode 100644
index 8e514a91..00000000
--- a/dev_scripts/flip_python.bat
+++ /dev/null
@@ -1,8 +0,0 @@
-
-if exist C:\Python27_32 (
- MOVE C:\Python27 C:\Python27_64
- MOVE C:\Python27_32 C:\Python27
-) else (
- MOVE C:\Python27 C:\Python27_32
- MOVE C:\Python27_64 C:\Python27
-)
\ No newline at end of file
diff --git a/dev_scripts/reload_IDA_7.bat b/dev_scripts/reload_IDA_7.bat
index 11219830..b8b6845f 100644
--- a/dev_scripts/reload_IDA_7.bat
+++ b/dev_scripts/reload_IDA_7.bat
@@ -5,13 +5,13 @@ REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
-del /F /Q "C:\tools\disassemblers\IDA 7.0\plugins\*lighthouse_plugin.py"
-rmdir "C:\tools\disassemblers\IDA 7.0\plugins\lighthouse" /s /q
+del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
+rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
-xcopy /s/y "..\plugin\*" "C:\tools\disassemblers\IDA 7.0\plugins\"
-del /F /Q "C:\tools\disassemblers\IDA 7.0\plugins\.#lighthouse_plugin.py"
+xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
+del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
-REM - Relaunch two IDA sessions
+REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.0\ida64.exe" "..\..\testcase\boombox7.i64"
diff --git a/dev_scripts/reload_IDA_7_ida.bat b/dev_scripts/reload_IDA_7_ida.bat
index 0904ead7..160b5dd3 100644
--- a/dev_scripts/reload_IDA_7_ida.bat
+++ b/dev_scripts/reload_IDA_7_ida.bat
@@ -5,13 +5,13 @@ REM - Purge old lighthouse log files
del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\lighthouse_logs\*"
REM - Delete the old plugin bits
-del /F /Q "C:\tools\disassemblers\IDA 7.0\plugins\*lighthouse_plugin.py"
-rmdir "C:\tools\disassemblers\IDA 7.0\plugins\lighthouse" /s /q
+del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\*lighthouse_plugin.py"
+rmdir "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\lighthouse" /s /q
REM - Copy over the new plugin bits
-xcopy /s/y "..\plugin\*" "C:\tools\disassemblers\IDA 7.0\plugins\"
-del /F /Q "C:\tools\disassemblers\IDA 7.0\plugins\.#lighthouse_plugin.py"
+xcopy /s/y "..\plugin\*" "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\"
+del /F /Q "C:\Users\user\AppData\Roaming\Hex-Rays\IDA Pro\plugins\.#lighthouse_plugin.py"
-REM - Relaunch two IDA sessions
+REM - Launch a new IDA session
start "" "C:\tools\disassemblers\IDA 7.0\ida.exe" "..\..\testcase\idaq7.idb"
diff --git a/plugin/lighthouse/composer/shell.py b/plugin/lighthouse/composer/shell.py
index 020f9d5b..f72fa578 100644
--- a/plugin/lighthouse/composer/shell.py
+++ b/plugin/lighthouse/composer/shell.py
@@ -29,6 +29,10 @@ def __init__(self, director, model, table=None):
self._model = model
self._table = table
+ # command / input
+ self._search_text = ""
+ self._command_timer = QtCore.QTimer()
+
# the last known user AST
self._last_ast = None
@@ -88,7 +92,7 @@ def _ui_init_shell(self):
# configure the shell background & default text color
palette = self._line.palette()
- palette.setColor(QtGui.QPalette.Base, self._palette.composer_bg)
+ palette.setColor(QtGui.QPalette.Base, self._palette.overview_bg)
palette.setColor(QtGui.QPalette.Text, self._palette.composer_fg)
palette.setColor(QtGui.QPalette.WindowText, self._palette.composer_fg)
self._line.setPalette(palette)
@@ -299,9 +303,7 @@ def _execute_search(self, text):
"""
Execute the search semantics.
"""
-
- # the given text is a real search query, apply it as a filter now
- self._model.filter_string(text[1:])
+ self._search_text = text[1:]
#
# if the user input is only "/" (starting to type something), hint
@@ -312,15 +314,48 @@ def _execute_search(self, text):
self._line_label.setText("Search")
return
+ #
+ # stop an existing command timer if there is one running. we are about
+ # to schedule a new one or execute inline. so the old/deferred command
+ # is no longer needed.
+ #
+
+ self._command_timer.stop()
+
+ #
+ # if the functions list is HUGE, we want to defer the filtering until
+    # we think the user has stopped typing as each pass may take a while
+ # to compute (while blocking the main thread...)
+ #
+
+ if self._director.metadata.is_big():
+ self._command_timer = singleshot(1000, self._execute_search_internal)
+ self._command_timer.start()
+
+ #
+ # the database is not *massive*, let's execute the search immediately
+ #
+
+ else:
+ self._execute_search_internal()
+
+ # done
+ return
+
+ def _execute_search_internal(self):
+ """
+ Execute the actual search filtering & coverage metrics.
+ """
+
+ # the given text is a real search query, apply it as a filter now
+ self._model.filter_string(self._search_text)
+
# compute coverage % of the visible (filtered) results
percent = self._model.get_modeled_coverage_percent()
# show the coverage % of the search results in the shell label
self._line_label.setText("%1.2f%%" % percent)
- # done
- return
-
def _highlight_search(self):
"""
Syntax highlight a search query.
@@ -399,10 +434,15 @@ def _compute_jump(self, text):
# the user string did not translate to a parsable hex number (address)
# or the function it falls within could not be found in the director.
#
- # attempt to convert the user input from a function name eg
- # 'sub_1400016F0' to a function address validated by the director
+ # attempt to convert the user input from a function name, eg 'main',
+ # or 'sub_1400016F0' to a function address validated by the director.
#
+ # special case to make 'sub_*' prefixed user inputs case insensitive
+ if text.lower().startswith("sub_"):
+ text = "sub_" + text[4:].upper()
+
+ # look up the text function name within the director's metadata
function_metadata = self._director.metadata.get_function_by_name(text)
if function_metadata:
return function_metadata.address
@@ -547,14 +587,14 @@ def _accept_composition(self):
# composition name
#
- coverage_name = idaapi.askstr(
- 0,
- str("COMP_%s" % self.text),
- "Save composition as..."
+ ok, coverage_name = prompt_string(
+ "Composition Name:",
+ "Please enter a name for this composition",
+ "COMP_%s" % self.text
)
# the user did not enter a coverage name or hit cancel - abort the save
- if not coverage_name:
+ if not (ok and coverage_name):
return
#
diff --git a/plugin/lighthouse/coverage.py b/plugin/lighthouse/coverage.py
index e5b5249d..dd2fdc70 100644
--- a/plugin/lighthouse/coverage.py
+++ b/plugin/lighthouse/coverage.py
@@ -124,6 +124,7 @@ def __init__(self, data, palette):
self.nodes = {}
self.functions = {}
+ self.instruction_percent = 0.0
#
# we instantiate a single weakref of ourself (the DatabaseMapping
@@ -151,23 +152,6 @@ def coverage(self):
"""
return self._hitmap.viewkeys()
- @property
- def instruction_percent(self):
- """
- The database coverage % by instructions executed in all defined functions.
- """
- num_funcs = len(self._metadata.functions)
-
- # avoid a zero division error
- if not num_funcs:
- return 0
-
- # sum all the function coverage %'s
- func_sum = sum(f.instruction_percent for f in self.functions.itervalues())
-
- # return the average function coverage % aka 'the database coverage %'
- return func_sum / num_funcs
-
#--------------------------------------------------------------------------
# Metadata Population
#--------------------------------------------------------------------------
@@ -179,10 +163,7 @@ def update_metadata(self, metadata, delta=None):
# install the new metadata
self._metadata = weakref.proxy(metadata)
-
- # unmap all the coverage affected by the metadata delta
- if delta:
- self._unmap_delta(delta)
+ self.unmap_all()
def refresh(self):
"""
@@ -214,6 +195,7 @@ def _finalize(self, dirty_nodes, dirty_functions):
"""
self._finalize_nodes(dirty_nodes)
self._finalize_functions(dirty_functions)
+ self._finalize_instruction_percent()
def _finalize_nodes(self, dirty_nodes):
"""
@@ -229,6 +211,23 @@ def _finalize_functions(self, dirty_functions):
for function_coverage in dirty_functions.itervalues():
function_coverage.finalize()
+ def _finalize_instruction_percent(self):
+ """
+ Finalize the database coverage % by instructions executed in all defined functions.
+ """
+
+ # sum all the instructions in the database metadata
+ total = sum(f.instruction_count for f in self._metadata.functions.itervalues())
+ if not total:
+ self.instruction_percent = 0.0
+ return
+
+ # sum all the instructions executed by the coverage
+ executed = sum(f.instructions_executed for f in self.functions.itervalues())
+
+    # compute the average function coverage % aka 'the database coverage %'
+ self.instruction_percent = float(executed) / total
+
#--------------------------------------------------------------------------
# Data Operations
#--------------------------------------------------------------------------
diff --git a/plugin/lighthouse/director.py b/plugin/lighthouse/director.py
index ead5e3c4..8cb7e8bd 100644
--- a/plugin/lighthouse/director.py
+++ b/plugin/lighthouse/director.py
@@ -1,12 +1,13 @@
import time
import string
import logging
-import weakref
import threading
import collections
+import idaapi # TODO: remove in v0.8
+
from lighthouse.util import *
-from lighthouse.metadata import DatabaseMetadata, MetadataDelta
+from lighthouse.metadata import DatabaseMetadata, metadata_progress
from lighthouse.coverage import DatabaseCoverage
from lighthouse.composer.parser import *
@@ -46,7 +47,7 @@ def __init__(self, palette):
self._palette = palette
# database metadata cache
- self._database_metadata = DatabaseMetadata()
+ self.metadata = DatabaseMetadata()
# flag to suspend/resume the automatic coverage aggregation
self._aggregation_suspended = False
@@ -154,6 +155,7 @@ def __init__(self, palette):
#----------------------------------------------------------------------
self._ast_queue = Queue.Queue()
+ self._composition_lock = threading.Lock()
self._composition_cache = CompositionCache()
self._composition_worker = threading.Thread(
@@ -171,15 +173,19 @@ def __init__(self, palette):
# events or changes to the underlying data they consume.
#
# Callbacks provide a way for us to notify any interested parties
- # of these key events.
+ # of these key events. Below are lists of registered notification
+ # callbacks. see 'Callbacks' section below for more info.
#
- # lists of registered notification callbacks, see 'Callbacks' below
+ # coverage callbacks
self._coverage_switched_callbacks = []
self._coverage_modified_callbacks = []
self._coverage_created_callbacks = []
self._coverage_deleted_callbacks = []
+ # metadata callbacks
+ self._metadata_modified_callbacks = []
+
def terminate(self):
"""
Cleanup & terminate the director.
@@ -196,13 +202,6 @@ def terminate(self):
# Properties
#--------------------------------------------------------------------------
- @property
- def metadata(self):
- """
- The active database metadata cache.
- """
- return self._database_metadata
-
@property
def coverage(self):
"""
@@ -246,126 +245,61 @@ def coverage_switched(self, callback):
"""
Subscribe a callback for coverage switch events.
"""
- self._register_callback(self._coverage_switched_callbacks, callback)
+ register_callback(self._coverage_switched_callbacks, callback)
def _notify_coverage_switched(self):
"""
Notify listeners of a coverage switch event.
"""
- self._notify_callback(self._coverage_switched_callbacks)
+ notify_callback(self._coverage_switched_callbacks)
def coverage_modified(self, callback):
"""
Subscribe a callback for coverage modification events.
"""
- self._register_callback(self._coverage_modified_callbacks, callback)
+ register_callback(self._coverage_modified_callbacks, callback)
def _notify_coverage_modified(self):
"""
Notify listeners of a coverage modification event.
"""
- self._notify_callback(self._coverage_modified_callbacks)
+ notify_callback(self._coverage_modified_callbacks)
def coverage_created(self, callback):
"""
Subscribe a callback for coverage creation events.
"""
- self._register_callback(self._coverage_created_callbacks, callback)
+ register_callback(self._coverage_created_callbacks, callback)
def _notify_coverage_created(self):
"""
Notify listeners of a coverage creation event.
"""
- self._notify_callback(self._coverage_created_callbacks) # TODO: send list of names created?
+ notify_callback(self._coverage_created_callbacks) # TODO: send list of names created?
def coverage_deleted(self, callback):
"""
Subscribe a callback for coverage deletion events.
"""
- self._register_callback(self._coverage_deleted_callbacks, callback)
+ register_callback(self._coverage_deleted_callbacks, callback)
def _notify_coverage_deleted(self):
"""
Notify listeners of a coverage deletion event.
"""
- self._notify_callback(self._coverage_deleted_callbacks) # TODO: send list of names deleted?
+ notify_callback(self._coverage_deleted_callbacks) # TODO: send list of names deleted?
- def _register_callback(self, callback_list, callback):
+ def metadata_modified(self, callback):
"""
- Register a given callable (callback) to the given callback_list.
-
- Adapted from http://stackoverflow.com/a/21941670
+ Subscribe a callback for metadata modification events.
"""
+ register_callback(self._metadata_modified_callbacks, callback)
- # create a weakref callback to an object method
- try:
- callback_ref = weakref.ref(callback.__func__), weakref.ref(callback.__self__)
-
- # create a wweakref callback to a stand alone function
- except AttributeError:
- callback_ref = weakref.ref(callback), None
-
- # 'register' the callback
- callback_list.append(callback_ref)
-
- def _notify_callback(self, callback_list):
+ def _notify_metadata_modified(self):
"""
- Notify the given list of registered callbacks.
-
- The given list (callback_list) is a list of weakref'd callables
- registered through the _register_callback function. To notify the
- callbacks we simply loop through the list and call them.
-
- This routine self-heals by removing dead callbacks for deleted objects.
-
- Adapted from http://stackoverflow.com/a/21941670
+ Notify listeners of a metadata modification event.
"""
- cleanup = []
-
- #
- # loop through all the registered callbacks in the given callback_list,
- # notifying active callbacks, and removing dead ones.
- #
-
- for callback_ref in callback_list:
- callback, obj_ref = callback_ref[0](), callback_ref[1]
-
- #
- # if the callback is an instance method, deference the instance
- # (an object) first to check that it is still alive
- #
-
- if obj_ref:
- obj = obj_ref()
-
- # if the object instance is gone, mark this callback for cleanup
- if obj is None:
- cleanup.append(callback_ref)
- continue
-
- # call the object instance callback
- try:
- callback(obj)
-
- # assume a Qt cleanup/deletion occured
- except RuntimeError as e:
- cleanup.append(callback_ref)
- continue
-
- # if the callback is a static method...
- else:
-
- # if the static method is deleted, mark this callback for cleanup
- if callback is None:
- cleanup.append(callback_ref)
- continue
-
- # call the static callback
- callback(self)
-
- # remove the deleted callbacks
- for callback_ref in cleanup:
- callback_list.remove(callback_ref)
+ notify_callback(self._metadata_modified_callbacks)
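The `_register_callback`/`_notify_callback` methods removed in this hunk are replaced by module-level `register_callback`/`notify_callback` helpers, presumably so the metadata class below can share them. A minimal standalone sketch of the weakref-based pattern the removed code implemented (simplified: stand-alone functions are invoked with no arguments here, whereas the removed code passed `self`):

```python
import weakref

def register_callback(callback_list, callback):
    """Store a callable as weakrefs so dead listeners can self-clean."""
    try:
        # bound method: weakref both the underlying function and its instance
        ref = (weakref.ref(callback.__func__), weakref.ref(callback.__self__))
    except AttributeError:
        # stand-alone function
        ref = (weakref.ref(callback), None)
    callback_list.append(ref)

def notify_callback(callback_list):
    """Invoke all live callbacks, pruning those whose referents died."""
    cleanup = []
    for entry in callback_list:
        func_ref, obj_ref = entry
        func = func_ref()
        if obj_ref is not None:               # bound method
            obj = obj_ref()
            if obj is None:                   # instance was collected
                cleanup.append(entry)
                continue
            func(obj)                         # re-bind and call
        elif func is None:                    # function was collected
            cleanup.append(entry)
        else:
            func()                            # stand-alone function
    for entry in cleanup:
        callback_list.remove(entry)
```

The weakrefs are what make this pattern safe for Qt-heavy plugins: a destroyed widget's subscriptions silently fall out of the list rather than keeping the widget alive or raising on notify.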
#----------------------------------------------------------------------
# Batch Loading
@@ -508,16 +442,42 @@ def delete_coverage(self, coverage_name):
"""
Delete a database coverage object by name.
"""
- assert coverage_name in self.coverage_names
#
# if the delete request targets the currently active coverage, we want
- # to switch into a safer coverage to try and avoid any ill effects.
+ # to switch into a safer coverage set to try and avoid any ill effects.
#
- if self.coverage_name == coverage_name:
+ if coverage_name in [self.coverage_name, AGGREGATE]:
self.select_coverage(NEW_COMPOSITION)
+ #
+ # the user is trying to delete one of their own loaded/created coverages
+ #
+
+ if coverage_name in self.coverage_names:
+ self._delete_user_coverage(coverage_name)
+
+ #
+ # the user is trying to delete the aggregate coverage set, which simply
+ # means clearing *all* loaded coverage sets
+ #
+
+ elif coverage_name == AGGREGATE:
+ self._delete_aggregate_coverage(coverage_name)
+
+ # unsupported / unknown coverage
+ else:
+ raise ValueError("Cannot delete %s, does not exist" % coverage_name)
+
+ # notify any listeners that we have deleted coverage
+ self._notify_coverage_deleted()
+
+ def _delete_user_coverage(self, coverage_name):
+ """
+ Delete a user created database coverage object by name.
+ """
+
# release the shorthand alias held by this coverage
self._release_shorthand_alias(coverage_name)
@@ -529,8 +489,21 @@ def delete_coverage(self, coverage_name):
if not self._aggregation_suspended:
self._refresh_aggregate()
- # notify any listeners that we have deleted coverage
- self._notify_coverage_deleted()
+ def _delete_aggregate_coverage(self, coverage_name):
+ """
+ Delete the aggregate set, effectively clearing all loaded coverage.
+ """
+
+ # loop through all the loaded coverage sets and release them
+ for coverage_name in self.coverage_names:
+ self._release_shorthand_alias(coverage_name)
+ self._database_coverage.pop(coverage_name)
+
+ # TODO: check if there's any references to the coverage aggregate...
+
+ # assign a new, blank aggregate set
+ self._special_coverage[AGGREGATE] = DatabaseCoverage(None, self._palette)
+ self._refresh_aggregate() # probably not needed
def get_coverage(self, name):
"""
@@ -693,8 +666,6 @@ def add_composition(self, composite_name, ast):
# evaluate the last AST into a coverage set
composite_coverage = self._evaluate_composition(ast)
- composite_coverage.update_metadata(self.metadata)
- composite_coverage.refresh() # TODO: hash refresh?
# save the evaluated coverage under the given name
self._update_coverage(composite_name, composite_coverage)
@@ -741,15 +712,14 @@ def _async_evaluate_ast(self):
# produce a single composite coverage object as described by the AST
composite_coverage = self._evaluate_composition(ast)
- # map the composited coverage data to the database metadata
- composite_coverage.update_metadata(self.metadata)
- composite_coverage.refresh()
-
# we always save the most recent composite to the hotshell entry
self._special_coverage[HOT_SHELL] = composite_coverage
+ #
# if the hotshell entry is the active coverage selection, notify
# listeners of its update
+ #
+
if self.coverage_name == HOT_SHELL:
self._notify_coverage_modified()
@@ -767,8 +737,37 @@ def _evaluate_composition(self, ast):
if isinstance(ast, TokenNull):
return self._NULL_COVERAGE
+ #
+ # the director's composition evaluation code (this function) is most
+ # generally called via the background caching evaluation thread known
+ # as self._composition_worker. But this function can also be called
+ # inline via the 'add_composition' function from a different thread
+ # (namely, the main thread)
+ #
+ # because of this, we must control access to the resources the AST
+ # evaluation code operates on by restricting execution to one thread
+ # at a time.
+ #
+ # should we call _evaluate_composition from the context of the main
+ # IDA thread, it is important that we do so in a pseudo non-blocking manner
+ # such that we don't hang IDA. await_lock(...) will allow the Qt/IDA
+ # main thread to yield to other threads while waiting for the lock
+ #
+
+ await_lock(self._composition_lock)
+
# recursively evaluate the AST
- return self._evaluate_composition_recursive(ast)
+ composite_coverage = self._evaluate_composition_recursive(ast)
+
+ # map the composited coverage data to the database metadata
+ composite_coverage.update_metadata(self.metadata)
+ composite_coverage.refresh() # TODO: hash refresh?
+
+ # done operating on shared data (coverage), release the lock
+ self._composition_lock.release()
+
+ # return the evaluated composition
+ return composite_coverage
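The comment block above describes `await_lock(...)` as a pseudo non-blocking acquire that lets the Qt/IDA main thread keep servicing events while waiting. The real helper presumably lives in `lighthouse.util`; a simplified standalone sketch, with `time.sleep()` standing in for pumping the UI event queue:

```python
import threading
import time

def await_lock(lock, yield_interval=0.02):
    """
    Acquire 'lock' without a single long blocking call.

    Poll with a non-blocking acquire and yield between attempts; in the
    plugin, the yield step would flush the Qt/IDA event queue so the main
    thread stays responsive while the composition worker holds the lock.
    """
    while not lock.acquire(False):
        time.sleep(yield_interval)
```

Pairing this with a plain `lock.release()` at the end of `_evaluate_composition` serializes AST evaluation between the background worker and the main thread without hanging IDA.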
def _evaluate_composition_recursive(self, node):
"""
@@ -944,40 +943,60 @@ def refresh(self):
logger.debug("Refreshing the CoverageDirector")
# (re)build our metadata cache of the underlying database
- delta = self._refresh_database_metadata()
+ future = self.refresh_metadata(metadata_progress, True)
+ await_future(future)
# (re)map each set of loaded coverage data to the database
- self._refresh_database_coverage(delta)
+ self._refresh_database_coverage()
- def _refresh_database_metadata(self):
+ def refresh_metadata(self, progress_callback=None, force=False):
"""
Refresh the database metadata cache utilized by the director.
+
+ Returns a future (Queue) that will carry the completion message.
"""
- logger.debug("Refreshing database metadata")
- # compute the metadata for the current state of the database
- new_metadata = DatabaseMetadata()
- new_metadata.build_metadata()
+ #
+ # if this is the first time the director is going to use / populate
+ # the database metadata, register the director for notifications of
+ # metadata modification (this should only happen once)
+ #
+ # TODO: this is a little dirty, but it will suffice.
+ #
+
+ if not self.metadata.cached:
+ self.metadata.function_renamed(self._notify_metadata_modified)
+
+ #
+ # if Lighthouse has collected metadata previously for this IDB
+ # session (eg, it is cached), ignore a request to refresh it unless
+ # explicitly told to refresh via force=True
+ #
- # compute the delta between the old metadata, and latest
- delta = MetadataDelta(new_metadata, self.metadata)
+ if self.metadata.cached and not force:
+ fake_queue = Queue.Queue()
+ fake_queue.put(False)
+ return fake_queue
- # save the new metadata in place of the old metadata
- self._database_metadata = new_metadata
+ # start the asynchronous metadata refresh
+ result_queue = self.metadata.refresh(progress_callback=progress_callback)
- # finally, return the list of nodes that have changed (the delta)
- return delta
+ # return the channel that will carry asynchronous result
+ return result_queue
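Both branches of `refresh_metadata` return a `Queue` used as a one-shot future: the cached path returns a pre-filled "fake" queue so callers can `await_future(...)` uniformly, whether or not any work actually happened. A small sketch of the pattern (Python 3's `queue`; the plugin itself targets Python 2's `Queue` module):

```python
import queue
import threading

def completed_future(result):
    """An already-resolved one-shot 'future', as the cached path returns."""
    fake = queue.Queue()
    fake.put(result)
    return fake

def submit(fn):
    """Run fn() on a worker thread; its result arrives via the queue."""
    future = queue.Queue()
    threading.Thread(target=lambda: future.put(fn())).start()
    return future

def await_future(future):
    """Block the caller until the single result is available."""
    return future.get()
```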
- def _refresh_database_coverage(self, delta):
+ def _refresh_database_coverage(self):
"""
Refresh all the database coverage mappings managed by the director.
"""
logger.debug("Refreshing database coverage mappings")
- for name in self.all_names:
+ for i, name in enumerate(self.all_names, 1):
logger.debug(" - %s" % name)
+ idaapi.replace_wait_box(
+ "Refreshing coverage mapping %u/%u" % (i, len(self.all_names))
+ )
coverage = self.get_coverage(name)
- coverage.update_metadata(self.metadata, delta)
+ coverage.update_metadata(self.metadata)
coverage.refresh()
def _refresh_aggregate(self):
diff --git a/plugin/lighthouse/metadata.py b/plugin/lighthouse/metadata.py
index f3d06d81..5cfa9686 100644
--- a/plugin/lighthouse/metadata.py
+++ b/plugin/lighthouse/metadata.py
@@ -71,6 +71,9 @@ def __init__(self):
# database defined functions
self.functions = {}
+ # database metadata cache status
+ self.cached = False
+
# lookup list members
self._stale_lookup = False
self._name2func = {}
@@ -78,6 +81,13 @@ def __init__(self):
self._node_addresses = []
self._function_addresses = []
+ # hook to listen for rename events from IDA
+ self._rename_hooks = RenameHooks()
+ self._rename_hooks.renamed = self._name_changed
+
+ # metadata callbacks (see director for more info)
+ self._function_renamed_callbacks = []
+
# asynchronous metadata collection thread
self._refresh_worker = None
self._stop_threads = False
@@ -259,6 +269,12 @@ def flatten_blocks(self, basic_blocks):
# return the list of addresses
return output
+ def is_big(self):
+ """
+ Return a size classification of the database / metadata.
+ """
+ return len(self.functions) > 50000
+
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
@@ -287,6 +303,15 @@ def refresh(self, function_addresses=None, progress_callback=None):
removed_functions = self.functions.viewkeys() - set(function_addresses)
for function_address in removed_functions:
+
+ # the function to delete
+ function_metadata = self.functions[function_address]
+
+ # delete all node metadata owned by this function from the db list
+ for node in function_metadata.nodes.itervalues():
+ del self.nodes[node.address]
+
+ # now delete the function metadata from the db list
del self.functions[function_address]
# schedule a deferred lookup list refresh if we deleted any functions
@@ -345,6 +370,7 @@ def abort_refresh(self, join=False):
if not (worker and worker.is_alive()):
self._stop_threads = False
+ self._refresh_worker = None
return
# signal the worker thread to stop
@@ -359,17 +385,27 @@ def _async_refresh(self, result_queue, function_addresses, progress_callback):
Internal asynchronous metadata collection worker.
"""
+ # pause our rename listening hooks, for speed
+ self._rename_hooks.unhook()
+
# collect metadata
- completed = self._async_collect_metadata(function_addresses, progress_callback)
+ completed = self._async_collect_metadata(
+ function_addresses,
+ progress_callback
+ )
# refresh the lookup lists
self._refresh_lookup()
+ # resume our rename listening hooks
+ self._rename_hooks.hook()
+
# send the refresh result (good/bad) in case anyone is still listening
if completed:
- result_queue.put(self)
+ self.cached = True
+ result_queue.put(True)
else:
- result_queue.put(None)
+ result_queue.put(False)
# clean up our thread's reference as it is basically done/dead
self._refresh_worker = None
@@ -401,6 +437,7 @@ def _refresh_lookup(self):
return False
# update the lookup lists
+ self._last_node = []
self._name2func = { f.name: f.address for f in self.functions.itervalues() }
self._node_addresses = sorted(self.nodes.keys())
self._function_addresses = sorted(self.functions.keys())
@@ -524,6 +561,63 @@ def _update_functions(self, fresh_metadata):
# return the delta for other interested consumers to use
return delta
+ #--------------------------------------------------------------------------
+ # Signal Handlers
+ #--------------------------------------------------------------------------
+
+ @mainthread
+ def _name_changed(self, address, new_name, local_name):
+ """
+ Handler for rename event in IDA.
+ """
+
+ # we should never care about local renames (eg, loc_40804b), ignore
+ if local_name or new_name.startswith("loc_"):
+ return 0
+
+ # get the function that this address falls within
+ function = self.get_function(address)
+
+ # if the address does not fall within a function (might happen?), ignore
+ if not function:
+ return 0
+
+ #
+ # ensure the renamed address matches the function start before
+ # renaming the function in our metadata cache.
+ #
+ # I am not sure when this would not be the case (globals? maybe)
+ # but I'd rather not find out.
+ #
+
+ if address == function.address:
+ logger.debug("Name changing @ 0x%X" % address)
+ logger.debug(" Old name: %s" % function.name)
+ function.name = idaapi.get_short_name(address)
+ logger.debug(" New name: %s" % function.name)
+
+ # notify any listeners that a function rename occurred
+ self._notify_function_renamed()
+
+ # necessary for IDP/IDB_Hooks
+ return 0
+
+ #--------------------------------------------------------------------------
+ # Callbacks
+ #--------------------------------------------------------------------------
+
+ def function_renamed(self, callback):
+ """
+ Subscribe a callback for function rename events.
+ """
+ register_callback(self._function_renamed_callbacks, callback)
+
+ def _notify_function_renamed(self):
+ """
+ Notify listeners of a function rename event.
+ """
+ notify_callback(self._function_renamed_callbacks)
+
#------------------------------------------------------------------------------
# Function Level Metadata
#------------------------------------------------------------------------------
@@ -580,21 +674,7 @@ def _refresh_name(self):
"""
Refresh the function name against the open database.
"""
- if using_ida7api:
- self.name = idaapi.get_func_name(self.address)
- else:
- self.name = idaapi.get_func_name2(self.address)
-
- #
- # the replace is sort of a 'special case' for the 'Prefix' IDA
- # plugin: https://github.com/gaasedelen/prefix
- #
- # % signs are used as a marker byte for the prefix. we simply
- # replace the % signs with a '_' before displaying them. this
- # technically mirrors the behavior of IDA's functions view
- #
-
- self.name = self.name.replace("%", "_")
+ self.name = idaapi.get_short_name(self.address)
def _refresh_nodes(self):
"""
@@ -635,6 +715,23 @@ def _refresh_nodes(self):
if node_start == node_end:
continue
+ #
+ # if the current node_start address does not fall within the
+ # original / entry 'function chunk', we want to ignore it.
+ #
+ # this check is used as an attempt to ignore the try/catch/SEH
+ # exception handling blocks that IDA 7 parses and displays in
+ # the graph view (and therefore, the flowchart).
+ #
+ # practically speaking, 99% of the time people aren't going to be
+ # interested in the coverage information on their exception
+ # handlers. I am skeptical that dynamic instrumentation tools
+ # would be able to collect coverage in these handlers anyway...
+ #
+
+ if idaapi.get_func_chunknum(function, node_start):
+ continue
+
# create a new metadata object for this node
node_metadata = NodeMetadata(node_start, node_end, node_id)
@@ -675,18 +772,6 @@ def _finalize(self):
self.instruction_count = sum(node.instruction_count for node in self.nodes.itervalues())
self.cyclomatic_complexity = self.edge_count - self.node_count + 2
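The `_finalize` line above computes McCabe's cyclomatic complexity as `E - N + 2`. A tiny worked check on an if/else "diamond" control flow graph:

```python
def cyclomatic_complexity(edges, nodes):
    """McCabe's M = E - N + 2 for one connected control flow graph."""
    return len(edges) - len(nodes) + 2

# an if/else "diamond": entry branches to then/else, both rejoin at exit
nodes = ["entry", "then", "else", "exit"]
edges = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]
# M = 4 - 4 + 2 = 2, i.e. two linearly independent paths
```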
- #--------------------------------------------------------------------------
- # Signal Handlers
- #--------------------------------------------------------------------------
-
- def name_changed(self, new_name):
- """
- Handler for rename event in IDA.
-
- TODO: hook this up
- """
- self.name = new_name
-
#--------------------------------------------------------------------------
# Operator Overloads
#--------------------------------------------------------------------------
@@ -997,3 +1082,14 @@ def metadata_progress(completed, total):
Handler for metadata collection callback, updates progress dialog.
"""
idaapi.replace_wait_box("Collected metadata for %u/%u Functions" % (completed, total))
+
+#--------------------------------------------------------------------------
+# Event Hooks
+#--------------------------------------------------------------------------
+
+if using_ida7api:
+ class RenameHooks(idaapi.IDB_Hooks):
+ pass
+else:
+ class RenameHooks(idaapi.IDP_Hooks):
+ pass
diff --git a/plugin/lighthouse/painting.py b/plugin/lighthouse/painting.py
index 71b6e7ce..06f92681 100644
--- a/plugin/lighthouse/painting.py
+++ b/plugin/lighthouse/painting.py
@@ -34,6 +34,18 @@ def __init__(self, director, palette):
self._painted_nodes = set()
self._painted_instructions = set()
+ #----------------------------------------------------------------------
+ # HexRays Hooking
+ #----------------------------------------------------------------------
+
+ #
+ # we attempt to hook hexrays the *first* time a repaint request is
+ # made. the assumption being that IDA is fully loaded and if hexrays is
+ # present, it will definitely be available (for hooking) by this time
+ #
+
+ self._attempted_hook = False
+
#----------------------------------------------------------------------
# Async
#----------------------------------------------------------------------
@@ -63,11 +75,6 @@ def __init__(self, director, palette):
# Callbacks
#----------------------------------------------------------------------
- self._hooks = PainterHooks()
- self._hooks.tform_visible = self._init_hexrays_hooks # IDA 6.x
- self._hooks.widget_visible = self._init_hexrays_hooks # IDA 7.x
- self._hooks.hook()
-
# register for cues from the director
self._director.coverage_switched(self.repaint)
self._director.coverage_modified(self.repaint)
@@ -85,23 +92,9 @@ def terminate(self):
# Initialization
#--------------------------------------------------------------------------
- def _init_hexrays_hooks(self, widget, _=None):
+ def _init_hexrays_hooks(self):
"""
Install Hex-Rays hooks (when available).
-
- NOTE:
-
- This is called when the tform/widget_visible event fires. The
- use of this event is somewhat arbitrary. It is simply an
- event that fires at least once after things seem mostly setup.
-
- We were using UI_Hooks.ready_to_run previously, but it appears
- that this event fires *before* this plugin is loaded depending
- on the user's individual copy of IDA.
-
- This approach seems relatively consistent for inividuals and builds
- from IDA 6.8 --> 7.0.
-
"""
result = False
@@ -111,14 +104,6 @@ def _init_hexrays_hooks(self, widget, _=None):
logger.debug("HexRays hooked: %r" % result)
- #
- # we only use self._hooks (IDP_Hooks) to install our hexrays hooks.
- # since this 'init' function should only ever be called once, remove
- # our IDP_Hooks now to clean up after ourselves.
- #
-
- self._hooks.unhook()
-
#------------------------------------------------------------------------------
# Painting
#------------------------------------------------------------------------------
@@ -127,6 +112,13 @@ def repaint(self):
"""
Paint coverage defined by the current database mappings.
"""
+
+ # attempt to hook hexrays *once*
+ if not self._attempted_hook:
+ self._init_hexrays_hooks()
+ self._attempted_hook = True
+
+ # signal the painting thread that it's time to repaint coverage
self._repaint_requested = True
self._repaint_request.set()
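This hunk replaces the `UI_Hooks`-driven initialization with a lazy, one-shot hook attempt on the first repaint request. The idea in isolation, with a stub standing in for the actual `idaapi.init_hexrays_plugin()` call:

```python
class LazyPainter:
    """
    One-shot lazy initialization: defer the Hex-Rays hook attempt to the
    first repaint request, by which point IDA is assumed fully loaded.
    """

    def __init__(self):
        self._attempted_hook = False
        self.hooked = False

    def _init_hexrays_hooks(self):
        # stand-in for idaapi.init_hexrays_plugin() + hook installation
        self.hooked = True

    def repaint(self):
        # attempt to hook hexrays *once*, on first use
        if not self._attempted_hook:
            self._init_hexrays_hooks()
            self._attempted_hook = True
        # ... then signal the painting thread as before ...
```

This trades the fragile "which startup event fires last?" question for a guarantee: by the time anyone asks for a repaint, Hex-Rays is either loaded or absent, and the attempt happens exactly once.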
@@ -695,13 +687,3 @@ def _async_action(self, paint_action, work_iterable):
# operation completed successfully
return True
-
-#------------------------------------------------------------------------------
-# Painter Hooks
-#------------------------------------------------------------------------------
-
-class PainterHooks(idaapi.UI_Hooks):
- """
- This is a concrete stub of IDA's UI_Hooks.
- """
- pass
diff --git a/plugin/lighthouse/palette.py b/plugin/lighthouse/palette.py
index 1af72da9..a132c6fa 100644
--- a/plugin/lighthouse/palette.py
+++ b/plugin/lighthouse/palette.py
@@ -42,14 +42,15 @@ def __init__(self):
# IDA Views / HexRays
#
- self._ida_coverage = [0x990000, 0xC8E696] # NOTE: IDA uses BBGGRR
+ self._ida_coverage = [0x990000, 0xFFE2A8] # NOTE: IDA uses BBGGRR
#
# Composing Shell
#
- self._composer_bg = [QtGui.QColor(30, 30, 30), QtGui.QColor(30, 30, 30)]
- self._composer_fg = [QtGui.QColor(255, 255, 255), QtGui.QColor(255, 255, 255)]
+ self._overview_bg = [QtGui.QColor(20, 20, 20), QtGui.QColor(20, 20, 20)]
+ self._composer_fg = [QtGui.QColor(255, 255, 255), QtGui.QColor(255, 255, 255)]
+
self._valid_text = [0x80F0FF, 0x0000FF]
self._invalid_text = [0xF02070, 0xFF0000]
self._invalid_highlight = [0x990000, 0xFF0000]
@@ -207,8 +208,8 @@ def ida_coverage(self):
#--------------------------------------------------------------------------
@property
- def composer_bg(self):
- return self._composer_bg[self.qt_theme]
+ def overview_bg(self):
+ return self._overview_bg[self.qt_theme]
@property
def composer_fg(self):
diff --git a/plugin/lighthouse/parsers/drcov.py b/plugin/lighthouse/parsers/drcov.py
index 5911839c..726fc3a7 100644
--- a/plugin/lighthouse/parsers/drcov.py
+++ b/plugin/lighthouse/parsers/drcov.py
@@ -40,21 +40,58 @@ def __init__(self, filepath=None):
# Public
#--------------------------------------------------------------------------
- def filter_by_module(self, module_name):
+ def get_module(self, module_name, fuzzy=True):
+ """
+ Get a module by its name.
+
+ Note that this is a 'fuzzy' lookup by default.
+ """
+
+ # fuzzy module name lookup
+ if fuzzy:
+
+ # attempt lookup using case-insensitive filename
+ for module in self.modules:
+ if module_name.lower() in module.filename.lower():
+ return module
+
+ #
+ # no hits yet... let's cleave the extension from the given module
+ # name (if present) and try again
+ #
+
+ if "." in module_name:
+ module_name = module_name.split(".")[0]
+
+ # attempt lookup using case-insensitive filename without extension
+ for module in self.modules:
+ if module_name.lower() in module.filename.lower():
+ return module
+
+ # strict lookup
+ else:
+ for module in self.modules:
+ if module_name == module.filename:
+ return module
+
+ # no matching module exists
+ return None
+
+ def get_blocks_by_module(self, module_name):
"""
Extract coverage blocks pertaining to the named module.
"""
# locate the coverage that matches the given module_name
- for module in self.modules:
- if module.filename.lower() == module_name.lower():
- mod_id = module.id
- break
+ module = self.get_module(module_name)
- # failed to find a module that matches the given name, bail
- else:
+ # if we fail to find a module that matches the given name, bail
+ if not module:
raise ValueError("Failed to find module '%s' in coverage data" % module_name)
+ # extract module id for speed
+ mod_id = module.id
+
# loop through the coverage data and filter out data for only this module
coverage_blocks = [(bb.start, bb.size) for bb in self.basic_blocks if bb.mod_id == mod_id]
@@ -357,5 +394,3 @@ class DrcovBasicBlock(Structure):
x = DrcovData(argv[1])
for bb in x.basic_blocks:
print "0x%08x" % bb.start
-
-
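The new `get_module` above performs a two-pass fuzzy lookup: a case-insensitive substring match, retried with the extension cleaved from the query. A standalone sketch of that matching logic, assuming modules are plain filename strings rather than the parser's module objects:

```python
def find_module(modules, module_name, fuzzy=True):
    """
    Return the first module filename matching 'module_name', else None.

    Fuzzy mode mirrors get_module(): a case-insensitive substring match,
    retried with the extension cleaved from the query if nothing hits.
    """
    if not fuzzy:
        return next((m for m in modules if m == module_name), None)

    # pass 1: case-insensitive substring lookup
    for m in modules:
        if module_name.lower() in m.lower():
            return m

    # pass 2: cleave the query's extension (if present) and retry
    stem = module_name.split(".")[0]
    if stem != module_name:
        for m in modules:
            if stem.lower() in m.lower():
                return m

    return None
```

The second pass is what makes coverage collected against, say, `boombox.dll` still match a module logged under a full path with a different extension or casing.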
diff --git a/plugin/lighthouse/ui/coverage_combobox.py b/plugin/lighthouse/ui/coverage_combobox.py
index 37a42ba5..e8d560c3 100644
--- a/plugin/lighthouse/ui/coverage_combobox.py
+++ b/plugin/lighthouse/ui/coverage_combobox.py
@@ -317,7 +317,7 @@ def refresh(self):
# 'Aggregate', or the 'seperator' indexes
#
- if model.data(model.index(row, 0), QtCore.Qt.AccessibleDescriptionRole) != ENTRY_USER:
+ if not model.data(model.index(row, 1), QtCore.Qt.DecorationRole):
self.setSpan(row, 0, 1, model.columnCount())
# this is a user entry, ensure there is no span present (clear it)
@@ -475,10 +475,27 @@ def data(self, index, role=QtCore.Qt.DisplayRole):
if index.column() == COLUMN_COVERAGE_STRING and index.row() != self._seperator_index:
return self._director.get_coverage_string(self._entries[index.row()])
- # 'X' icon data request
+ # icon display request
elif role == QtCore.Qt.DecorationRole:
- if index.column() == COLUMN_DELETE and index.row() > self._seperator_index:
- return self._delete_icon
+
+ # the icon request is for the 'X' column
+ if index.column() == COLUMN_DELETE:
+
+ #
+ # if the coverage entry is below the seperator, it is a user
+ # loaded coverage and should always be deletable
+ #
+
+ if index.row() > self._seperator_index:
+ return self._delete_icon
+
+ #
+ # as a special case, we allow the aggregate to have a clear
+ # icon, which will clear all user loaded coverages
+ #
+
+ elif self._entries[index.row()] == "Aggregate":
+ return self._delete_icon
# entry type request
elif role == QtCore.Qt.AccessibleDescriptionRole:
diff --git a/plugin/lighthouse/ui/coverage_overview.py b/plugin/lighthouse/ui/coverage_overview.py
index 6c8482d8..44a708c7 100644
--- a/plugin/lighthouse/ui/coverage_overview.py
+++ b/plugin/lighthouse/ui/coverage_overview.py
@@ -8,7 +8,7 @@
from lighthouse.util import *
from .coverage_combobox import CoverageComboBox
from lighthouse.composer import ComposingShell
-from lighthouse.metadata import FunctionMetadata
+from lighthouse.metadata import FunctionMetadata, metadata_progress
from lighthouse.coverage import FunctionCoverage
logger = logging.getLogger("Lighthouse.UI.Overview")
@@ -55,14 +55,56 @@
# Pseudo Widget Filter
#------------------------------------------------------------------------------
+debugger_docked = False
+
class EventProxy(QtCore.QObject):
def __init__(self, target):
super(EventProxy, self).__init__()
self._target = target
def eventFilter(self, source, event):
+
+ #
+ # hook the destroy event of the coverage overview widget so that we can
+ # cleanup after ourselves in the interest of stability
+ #
+
if int(event.type()) == 16: # NOTE/COMPAT: QtCore.QEvent.Destroy not in IDA7?
self._target.terminate()
+
+ #
+ # this is an unknown event, but it seems to fire when the widget is
+ # being saved/restored by a QMainWidget. We use this to try and ensure
+ # the Coverage Overview stays docked when flipping between Reversing
+ # and Debugging states in IDA.
+ #
+ # See issue #16 on github for more information.
+ #
+
+ if int(event.type()) == 2002:
+
+ #
+ # if the general registers IDA View exists, we make the assumption
+ # that the user has probably started debugging.
+ #
+
+ # NOTE / COMPAT:
+ if using_ida7api:
+ debug_mode = bool(idaapi.find_widget("General registers"))
+ else:
+ debug_mode = bool(idaapi.find_tform("General registers"))
+
+ #
+ # if this is the first time the user has started debugging, dock
+ # the coverage overview in the debug QMainWidget workspace. its
+ # dock status / position should persist across future debugger launches.
+ #
+
+ global debugger_docked
+ if debug_mode and not debugger_docked:
+ idaapi.set_dock_pos(self._target._title, "Structures", idaapi.DP_TAB)
+ debugger_docked = True
+
return False
#------------------------------------------------------------------------------
@@ -80,7 +122,10 @@ def __init__(self, director):
plugin_resource(os.path.join("icons", "overview.png"))
)
- # internal
+ # local reference to the director
+ self._director = director
+
+ # underlying data model for the coverage overview
self._model = CoverageModel(director, self._widget)
# pseudo widget science
@@ -89,7 +134,7 @@ def __init__(self, director):
self._widget.installEventFilter(self._events)
# initialize the plugin UI
- self._ui_init(director)
+ self._ui_init()
# refresh the data UI such that it reflects the most recent data
self.refresh()
@@ -121,7 +166,7 @@ def isVisible(self):
# Initialization - UI
#--------------------------------------------------------------------------
- def _ui_init(self, director):
+ def _ui_init(self):
"""
Initialize UI elements.
"""
@@ -131,22 +176,24 @@ def _ui_init(self, director):
self._font_metrics = QtGui.QFontMetricsF(self._font)
# initialize our ui elements
- self._ui_init_table(director)
- self._ui_init_toolbar(director)
+ self._ui_init_table()
+ self._ui_init_toolbar()
+ self._ui_init_ctx_menu_actions()
self._ui_init_signals()
# layout the populated ui just before showing it
self._ui_layout()
- def _ui_init_table(self, director):
+ def _ui_init_table(self):
"""
Initialize the coverage table.
"""
+ palette = self._director._palette
self._table = QtWidgets.QTableView()
self._table.setFocusPolicy(QtCore.Qt.NoFocus)
self._table.setStyleSheet(
- "QTableView { gridline-color: black; } " +
- "QTableView::item:selected { color: white; background-color: %s; } " % director._palette.selection.name()
+ "QTableView { gridline-color: black; background-color: %s } " % palette.overview_bg.name() +
+ "QTableView::item:selected { color: white; background-color: %s; } " % palette.selection.name()
)
# set these properties so the user can arbitrarily shrink the table
@@ -193,13 +240,13 @@ def _ui_init_table(self, director):
self._table.setSortingEnabled(True)
hh.setSortIndicator(FUNC_ADDR, QtCore.Qt.AscendingOrder)
- def _ui_init_toolbar(self, director):
+ def _ui_init_toolbar(self):
"""
Initialize the coverage toolbar.
"""
# initialize toolbar elements
- self._ui_init_toolbar_elements(director)
+ self._ui_init_toolbar_elements()
# populate the toolbar
self._toolbar = QtWidgets.QToolBar()
@@ -225,20 +272,20 @@ def _ui_init_toolbar(self, director):
self._toolbar.addWidget(self._hide_zero_label)
self._toolbar.addWidget(self._hide_zero_checkbox)
- def _ui_init_toolbar_elements(self, director):
+ def _ui_init_toolbar_elements(self):
"""
Initialize the coverage toolbar UI elements.
"""
# the composing shell
self._shell = ComposingShell(
- director,
+ self._director,
weakref.proxy(self._model),
self._table
)
# the coverage combobox
- self._combobox = CoverageComboBox(director)
+ self._combobox = CoverageComboBox(self._director)
# the checkbox to hide 0% coverage entries
self._hide_zero_label = QtWidgets.QLabel("Hide 0% Coverage: ")
@@ -273,6 +320,23 @@ def _ui_init_toolbar_elements(self, director):
# give the shell expansion preference over the combobox
self._splitter.setStretchFactor(0, 1)
+ def _ui_init_ctx_menu_actions(self):
+ """
+ Initialize the right click context menu actions.
+ """
+
+ # function actions
+ self._action_rename = QtWidgets.QAction("Rename", None)
+ self._action_copy_name = QtWidgets.QAction("Copy name", None)
+ self._action_copy_address = QtWidgets.QAction("Copy address", None)
+
+ # function prefixing actions
+ self._action_prefix = QtWidgets.QAction("Prefix selected functions", None)
+ self._action_clear_prefix = QtWidgets.QAction("Clear prefixes", None)
+
+ # misc actions
+ self._action_refresh_metadata = QtWidgets.QAction("Full refresh (slow)", None)
+
def _ui_init_signals(self):
"""
Connect UI signals.
@@ -282,8 +346,8 @@ def _ui_init_signals(self):
self._table.doubleClicked.connect(self._ui_entry_double_click)
# right click popup menu
- #self._table.setContextMenuPolicy(Qt.CustomContextMenu)
- #self._table.customContextMenuRequested.connect(...)
+ self._table.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
+ self._table.customContextMenuRequested.connect(self._ui_ctx_menu_handler)
# toggle 0% coverage checkbox
self._hide_zero_checkbox.stateChanged.connect(self._ui_hide_zero_toggle)
@@ -307,19 +371,154 @@ def _ui_layout(self):
def _ui_entry_double_click(self, index):
"""
- Handle double click event on the coverage table view.
+ Handle double click event on the coverage table.
A double click on the coverage table view will jump the user to
the corresponding function in the IDA disassembly view.
"""
idaapi.jumpto(self._model.row2func[index.row()])
+ def _ui_ctx_menu_handler(self, position):
+ """
+ Handle right click context menu event on the coverage table.
+ """
+
+ # create a right click menu based on the state and context
+ ctx_menu = self._populate_ctx_menu()
+ if not ctx_menu:
+ return
+
+ # show the popup menu to the user, and wait for their selection
+ action = ctx_menu.exec_(self._table.viewport().mapToGlobal(position))
+
+ # process the user action
+ self._process_ctx_menu_action(action)
+
def _ui_hide_zero_toggle(self, checked):
"""
Handle state change of 'Hide 0% Coverage' checkbox.
"""
self._model.filter_zero_coverage(checked)
+ #--------------------------------------------------------------------------
+ # Context Menu
+ #--------------------------------------------------------------------------
+
+ def _populate_ctx_menu(self):
+ """
+ Populate a context menu for the table view based on selection.
+
+ Returns a populated QMenu, or None.
+ """
+
+ # get the list rows currently selected in the coverage table
+ selected_rows = self._table.selectionModel().selectedRows()
+ if len(selected_rows) == 0:
+ return None
+
+ # the context menu we will dynamically populate
+ ctx_menu = QtWidgets.QMenu()
+
+ #
+ # if there is only one table entry (a function) selected, then
+ # show the menu actions available for a single function such as
+        # copying its name or address, or renaming the function.
+ #
+
+ if len(selected_rows) == 1:
+ ctx_menu.addAction(self._action_rename)
+ ctx_menu.addAction(self._action_copy_name)
+ ctx_menu.addAction(self._action_copy_address)
+ ctx_menu.addSeparator()
+
+ # function prefixing actions
+ ctx_menu.addAction(self._action_prefix)
+ ctx_menu.addAction(self._action_clear_prefix)
+ ctx_menu.addSeparator()
+
+ # misc actions
+ ctx_menu.addAction(self._action_refresh_metadata)
+
+ # return the completed context menu
+ return ctx_menu
+
+ def _process_ctx_menu_action(self, action):
+ """
+ Process the given (user selected) context menu action.
+ """
+
+        # no context menu action was selected, nothing else to do
+ if not action:
+ return
+
+ # get the list rows currently selected in the coverage table
+ selected_rows = self._table.selectionModel().selectedRows()
+ if len(selected_rows) == 0:
+ return
+
+ #
+ # extract the function addresses for the list of selected rows
+ # as they will probably come in handy later.
+ #
+
+ function_addresses = []
+ for index in selected_rows:
+ address = self._model.row2func[index.row()]
+ function_addresses.append(address)
+
+ #
+ # check the universal actions first
+ #
+
+ # handle the 'Prefix functions' action
+ if action == self._action_prefix:
+ gui_prefix_functions(function_addresses)
+
+ # handle the 'Clear prefix' action
+ elif action == self._action_clear_prefix:
+ clear_prefixes(function_addresses)
+
+ # handle the 'Refresh metadata' action
+ elif action == self._action_refresh_metadata:
+
+ idaapi.show_wait_box("Building database metadata...")
+ self._director.refresh()
+
+ # ensure the table's model gets refreshed
+ idaapi.replace_wait_box("Refreshing Coverage Overview...")
+ self.refresh()
+
+ # all done
+ idaapi.hide_wait_box()
+
+ #
+ # the following actions are only applicable if there is only one
+ # row/function selected in the coverage overview table. don't
+ # bother to check multi-function selections against these
+ #
+
+ if len(selected_rows) != 1:
+ return
+
+ # unpack the single QModelIndex
+ index = selected_rows[0]
+ function_address = function_addresses[0]
+
+ # handle the 'Rename' action
+ if action == self._action_rename:
+ gui_rename_function(function_address)
+
+ # handle the 'Copy name' action
+ elif action == self._action_copy_name:
+ name_index = self._model.index(index.row(), FUNC_NAME)
+ function_name = self._model.data(name_index, QtCore.Qt.DisplayRole)
+ copy_to_clipboard(function_name)
+
+ # handle the 'Copy address' action
+ elif action == self._action_copy_address:
+ address_string = "0x%X" % function_address
+ copy_to_clipboard(address_string)
+
#--------------------------------------------------------------------------
# Refresh
#--------------------------------------------------------------------------
@@ -346,7 +545,7 @@ def __init__(self, director, parent=None):
super(CoverageModel, self).__init__(parent)
self._blank_coverage = FunctionCoverage(idaapi.BADADDR)
- # the data source
+ # local reference to the director
self._director = director
# mapping to correlate a given row in the table to its function coverage
@@ -400,6 +599,7 @@ def __init__(self, director, parent=None):
# register for cues from the director
self._director.coverage_switched(self._internal_refresh)
self._director.coverage_modified(self._internal_refresh)
+ self._director.metadata_modified(self._data_changed)
#--------------------------------------------------------------------------
# AbstractItemModel Overloads
@@ -455,8 +655,27 @@ def data(self, index, role=QtCore.Qt.DisplayRole):
column = index.column()
# lookup the function info for this row
- function_address = self.row2func[index.row()]
- function_metadata = self._director.metadata.functions[function_address]
+ try:
+ function_address = self.row2func[index.row()]
+ function_metadata = self._director.metadata.functions[function_address]
+
+ #
+ # if we hit a KeyError, it is probably because the database metadata
+ # is being refreshed and the model (this object) has yet to be
+ # updated.
+ #
+ # this should only ever happen as a result of the user using the
+ # right click 'Refresh metadata' action. And even then, only when
+ # a function they undefined in the IDB is visible in the coverage
+ # overview table view.
+ #
+ # In theory, the table should get refreshed *after* the metadata
+        # refresh completes. So for now, we simply return the filler
+ # string '?'
+ #
+
+ except KeyError:
+ return "?"
#
# remember, if a function does *not* have coverage data, it will
@@ -613,8 +832,19 @@ def get_modeled_coverage_percent(self):
"""
Get the coverage % represented by the current (visible) model.
"""
- sum_coverage = sum(cov.instruction_percent for cov in self._visible_coverage.itervalues())
- return (sum_coverage / (self._row_count or 1))*100
+
+ # sum the # of instructions in all the visible functions
+ instruction_count = sum(
+ meta.instruction_count for meta in self._visible_metadata.itervalues()
+ )
+
+ # sum the # of instructions executed in all the visible functions
+ instructions_executed = sum(
+ cov.instructions_executed for cov in self._visible_coverage.itervalues()
+ )
+
+ # compute coverage percentage of the visible functions
+ return (float(instructions_executed) / (instruction_count or 1))*100
#--------------------------------------------------------------------------
# Filters
@@ -666,6 +896,13 @@ def _internal_refresh(self):
# sort the data set according to the last selected sorted column
self.sort(self._last_sort, self._last_sort_order)
+ @idafast
+ def _data_changed(self):
+ """
+ Notify attached views that simple model data has been updated/modified.
+ """
+ self.dataChanged.emit(QtCore.QModelIndex(), QtCore.QModelIndex())
+
def _refresh_data(self):
"""
Initialize the mapping to go from displayed row to function.
diff --git a/plugin/lighthouse/util/ida.py b/plugin/lighthouse/util/ida.py
index dc9a1a5c..55d9686d 100644
--- a/plugin/lighthouse/util/ida.py
+++ b/plugin/lighthouse/util/ida.py
@@ -46,7 +46,7 @@ def map_line2citem(decompilation_text):
for line_number in xrange(decompilation_text.size()):
line_text = decompilation_text[line_number].line
line2citem[line_number] = lex_citem_indexes(line_text)
- logger.debug("Line Text: %s" % binascii.hexlify(line_text))
+ #logger.debug("Line Text: %s" % binascii.hexlify(line_text))
return line2citem
@@ -475,23 +475,19 @@ def thunk():
# IDA Async Magic
#------------------------------------------------------------------------------
-@mainthread
-def await_future(future, block=True, timeout=1.0):
+def await_future(future):
"""
This is effectively a technique I use to get around completely blocking
IDA's mainthread while waiting for a threaded result that may need to make
- use of the sync operators.
+ use of the execute_sync operators.
Waiting for a 'future' thread result to come through via this function
lets other execute_sync actions to slip through (at least Read, Fast).
"""
-
- elapsed = 0 # total time elapsed processing this future object
interval = 0.02 # the interval which we wait for a response
- end_time = time.time() + timeout
- # run until the the future completes or the timeout elapses
- while block or (time.time() < end_time):
+    # run until the future arrives
+ while True:
# block for a brief period to see if the future completes
try:
@@ -503,25 +499,151 @@ def await_future(future, block=True, timeout=1.0):
#
except Queue.Empty as e:
- logger.debug("Flushing future...")
+ pass
+
+ logger.debug("Awaiting future...")
+
+ #
+ # if we are executing (well, blocking) as the main thread, we need
+ # to flush the event loop so IDA does not hang
+ #
+
+ if idaapi.is_main_thread():
flush_ida_sync_requests()
+def await_lock(lock):
+ """
+ Attempt to acquire a lock without blocking the IDA mainthread.
+
+ See await_future() for more details.
+ """
+
+ elapsed = 0 # total time elapsed waiting for the lock
+ interval = 0.02 # the interval (in seconds) between acquire attempts
+ timeout = 60.0 # the total time allotted to acquiring the lock
+ end_time = time.time() + timeout
+
+    # wait until the lock is available
+ while time.time() < end_time:
+
+ #
+ # attempt to acquire the given lock without blocking (via 'False').
+        # if we successfully acquire the lock, then we can return (success)
+ #
+
+ if lock.acquire(False):
+ logger.debug("Acquired lock!")
+ return
+
+ #
+ # the lock is not available yet. we need to sleep so we don't choke
+ # the cpu, and try to acquire the lock again next time through...
+ #
+
+ logger.debug("Awaiting lock...")
+ time.sleep(interval)
+
+ #
+ # if we are executing (well, blocking) as the main thread, we need
+ # to flush the event loop so IDA does not hang
+ #
+
+ if idaapi.is_main_thread():
+ flush_ida_sync_requests()
+
+ #
+ # we spent 60 seconds trying to acquire the lock, but never got it...
+    # to avoid hanging IDA indefinitely (or worse), we abort by raising an exception
+ #
+
+ raise RuntimeError("Failed to acquire lock after %f seconds!" % timeout)
+
@mainthread
def flush_ida_sync_requests():
"""
Flush all execute_sync requests.
-
- NOTE: This MUST be called from the IDA Mainthread to be effective.
"""
- if not idaapi.is_main_thread():
- return False
# this will trigger/flush the IDA UI loop
qta = QtCore.QCoreApplication.instance()
qta.processEvents()
- # done
- return True
+#------------------------------------------------------------------------------
+# IDA Util
+#------------------------------------------------------------------------------
+
+# taken from https://github.com/gaasedelen/prefix
+PREFIX_DEFAULT = "MyPrefix"
+PREFIX_SEPARATOR = '%'
+
+def prefix_function(function_address, prefix):
+ """
+ Prefix a function name with the given string.
+ """
+ original_name = get_function_name(function_address)
+ new_name = str(prefix) + PREFIX_SEPARATOR + str(original_name)
+
+ # rename the function with the newly prefixed name
+ idaapi.set_name(function_address, new_name, idaapi.SN_NOWARN)
+
+def prefix_functions(function_addresses, prefix):
+ """
+ Prefix a list of functions with the given string.
+ """
+ for function_address in function_addresses:
+ prefix_function(function_address, prefix)
+
+def clear_prefix(function_address):
+ """
+ Clear the prefix from a given function.
+ """
+ original_name = get_function_name(function_address)
+
+ #
+ # locate the last (rfind) prefix separator in the function name as
+ # we will want to keep everything that comes after it
+ #
+
+ i = original_name.rfind(PREFIX_SEPARATOR)
+
+ # if there is no prefix (separator), there is nothing to trim
+ if i == -1:
+ return
+
+ # trim the prefix off the original function name and discard it
+ new_name = original_name[i+1:]
+
+ # rename the function with the prefix stripped
+ idaapi.set_name(function_address, new_name, idaapi.SN_NOWARN)
+
+def clear_prefixes(function_addresses):
+ """
+ Clear the prefix from a list of given functions.
+ """
+ for function_address in function_addresses:
+ clear_prefix(function_address)
+
+def get_function_name(function_address):
+ """
+ Get a function's true name.
+ """
+
+ # get the original function name from the database
+ if using_ida7api:
+ original_name = idaapi.get_name(function_address)
+ else:
+ original_name = idaapi.get_true_name(idaapi.BADADDR, function_address)
+
+ # sanity check
+    if original_name is None:
+ raise ValueError("Invalid function address")
+
+ # return the function name
+ return original_name
+
+#------------------------------------------------------------------------------
+# Interactive
+#------------------------------------------------------------------------------
@mainthread
def prompt_string(label, title, default=""):
@@ -541,5 +663,50 @@ def prompt_string(label, title, default=""):
dlg.fontMetrics().averageCharWidth()*10
)
ok = dlg.exec_()
- text = dlg.textValue()
+ text = str(dlg.textValue())
return (ok, text)
+
+@mainthread
+def gui_rename_function(function_address):
+ """
+ Interactive rename of a function in the IDB.
+ """
+ original_name = get_function_name(function_address)
+
+ # prompt the user for a new function name
+ ok, new_name = prompt_string(
+ "Please enter function name",
+ "Rename Function",
+ original_name
+ )
+
+ #
+ # if the user clicked cancel, or the name they entered
+ # is identical to the original, there's nothing to do
+ #
+
+    if not (ok and new_name != original_name):
+ return
+
+ # rename the function
+ idaapi.set_name(function_address, new_name, idaapi.SN_NOCHECK)
+
+@mainthread
+def gui_prefix_functions(function_addresses):
+ """
+ Interactive prefixing of functions in the IDB.
+ """
+
+    # prompt the user for a function prefix
+ ok, prefix = prompt_string(
+ "Please enter a function prefix",
+ "Prefix Function(s)",
+ PREFIX_DEFAULT
+ )
+
+ # bail if the user clicked cancel or failed to enter a prefix
+ if not (ok and prefix):
+ return
+
+ # prefix the given functions with the user specified prefix
+ prefix_functions(function_addresses, prefix)
diff --git a/plugin/lighthouse/util/log.py b/plugin/lighthouse/util/log.py
index ec70b442..f83eb38a 100644
--- a/plugin/lighthouse/util/log.py
+++ b/plugin/lighthouse/util/log.py
@@ -101,6 +101,18 @@ def cleanup_log_directory(log_directory):
def start_logging():
global logger
+ # create the Lighthouse logger
+ logger = logging.getLogger("Lighthouse")
+
+ #
+ # only enable logging if the LIGHTHOUSE_LOGGING environment variable is
+ # present. we simply return a stub logger to sinkhole messages.
+ #
+
+    if os.getenv("LIGHTHOUSE_LOGGING") is None:
+ logger.disabled = True
+ return logger
+
# create a directory for lighthouse logs if it does not exist
log_dir = get_log_dir()
if not os.path.exists(log_dir):
@@ -117,9 +129,6 @@ def start_logging():
level=logging.DEBUG
)
- # create the Lighthouse logger
- logger = logging.getLogger("Lighthouse")
-
# proxy STDOUT/STDERR to the log files too
stdout_logger = logging.getLogger('Lighthouse.STDOUT')
stderr_logger = logging.getLogger('Lighthouse.STDERR')
diff --git a/plugin/lighthouse/util/misc.py b/plugin/lighthouse/util/misc.py
index 343b6783..c38ad13d 100644
--- a/plugin/lighthouse/util/misc.py
+++ b/plugin/lighthouse/util/misc.py
@@ -1,4 +1,5 @@
import os
+import weakref
import collections
import idaapi
@@ -8,13 +9,14 @@
# Plugin Util
#------------------------------------------------------------------------------
+PLUGIN_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
+
def plugin_resource(resource_name):
"""
Return the full path for a given plugin resource file.
"""
return os.path.join(
- idaapi.idadir(idaapi.PLG_SUBDIR),
- "lighthouse",
+ PLUGIN_PATH,
"ui",
"resources",
resource_name
@@ -32,6 +34,24 @@ def MonospaceFont():
font.setStyleHint(QtGui.QFont.TypeWriter)
return font
+def singleshot(ms, function=None):
+ """
+ A Qt Singleshot timer that can be stopped.
+ """
+ timer = QtCore.QTimer()
+ timer.setInterval(ms)
+ timer.setSingleShot(True)
+ timer.timeout.connect(function)
+ return timer
+
+def copy_to_clipboard(data):
+ """
+ Copy the given data (a string) to the user clipboard.
+ """
+ cb = QtWidgets.QApplication.clipboard()
+ cb.clear(mode=cb.Clipboard)
+ cb.setText(data, mode=cb.Clipboard)
+
#------------------------------------------------------------------------------
# Python Util
#------------------------------------------------------------------------------
@@ -53,6 +73,83 @@ def hex_list(items):
"""
return '[{}]'.format(', '.join('0x%X' % x for x in items))
+def register_callback(callback_list, callback):
+ """
+ Register a given callable (callback) to the given callback_list.
+
+ Adapted from http://stackoverflow.com/a/21941670
+ """
+
+ # create a weakref callback to an object method
+ try:
+ callback_ref = weakref.ref(callback.__func__), weakref.ref(callback.__self__)
+
+    # create a weakref callback to a standalone function
+ except AttributeError:
+ callback_ref = weakref.ref(callback), None
+
+ # 'register' the callback
+ callback_list.append(callback_ref)
+
+def notify_callback(callback_list):
+ """
+ Notify the given list of registered callbacks.
+
+ The given list (callback_list) is a list of weakref'd callables
+ registered through the _register_callback function. To notify the
+ callbacks we simply loop through the list and call them.
+
+ This routine self-heals by removing dead callbacks for deleted objects.
+
+ Adapted from http://stackoverflow.com/a/21941670
+ """
+ cleanup = []
+
+ #
+ # loop through all the registered callbacks in the given callback_list,
+ # notifying active callbacks, and removing dead ones.
+ #
+
+ for callback_ref in callback_list:
+ callback, obj_ref = callback_ref[0](), callback_ref[1]
+
+ #
+        # if the callback is an instance method, dereference the instance
+ # (an object) first to check that it is still alive
+ #
+
+ if obj_ref:
+ obj = obj_ref()
+
+ # if the object instance is gone, mark this callback for cleanup
+ if obj is None:
+ cleanup.append(callback_ref)
+ continue
+
+ # call the object instance callback
+ try:
+ callback(obj)
+
+            # assume a Qt cleanup/deletion occurred
+ except RuntimeError as e:
+ cleanup.append(callback_ref)
+ continue
+
+        # if the callback is a standalone function...
+ else:
+
+            # if the standalone function was deleted, mark this callback for cleanup
+ if callback is None:
+ cleanup.append(callback_ref)
+ continue
+
+            # call the standalone callback
+ callback()
+
+ # remove the deleted callbacks
+ for callback_ref in cleanup:
+ callback_list.remove(callback_ref)
+
#------------------------------------------------------------------------------
# Coverage Util
#------------------------------------------------------------------------------
diff --git a/plugin/lighthouse_plugin.py b/plugin/lighthouse_plugin.py
index a85f2346..d609737d 100644
--- a/plugin/lighthouse_plugin.py
+++ b/plugin/lighthouse_plugin.py
@@ -20,7 +20,7 @@
# IDA Plugin
#------------------------------------------------------------------------------
-PLUGIN_VERSION = "0.6.1"
+PLUGIN_VERSION = "0.7.0"
AUTHORS = "Markus Gaasedelen"
DATE = "2017"
@@ -392,7 +392,7 @@ def interactive_load_batch(self):
# database metadata while the user will be busy selecting coverage files.
#
- future = self.director.metadata.refresh(progress_callback=metadata_progress)
+ future = self.director.refresh_metadata(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
@@ -497,7 +497,7 @@ def interactive_load_file(self):
# database metadata while the user will be busy selecting coverage files.
#
- future = self.director.metadata.refresh(progress_callback=metadata_progress)
+ future = self.director.refresh_metadata(progress_callback=metadata_progress)
#
# we will now prompt the user with an interactive file dialog so they
@@ -706,7 +706,7 @@ def _normalize_coverage(self, coverage_data, metadata):
# extract the coverage relevant to this IDB (well, the root binary)
root_filename = idaapi.get_root_filename()
- coverage_blocks = coverage_data.filter_by_module(root_filename)
+ coverage_blocks = coverage_data.get_blocks_by_module(root_filename)
# rebase the basic blocks
base = idaapi.get_imagebase()
diff --git a/screenshots/context_menu.gif b/screenshots/context_menu.gif
new file mode 100644
index 00000000..c626f97a
Binary files /dev/null and b/screenshots/context_menu.gif differ