v3.0.0 - The *major* refactor release!
gowitness v3
Gosh, so much work went into this release, I really don't know where to start. v3 has been a while coming, and after I needed some changes for an internal tool, I figured now was as good a time as any to finally fix the stuff that had been bugging me. So, I branched to v3, deleted everything but the README and .gitignore files, and scaffolded a new cobra project.
I had lots of ideas, and a fresh start helped me finally get those in. A lot has changed, been reworked, refactored, upgraded, and more. I had fun building this, and hope you have fun using it!
If this were your typical mobile application, the release notes would have just read "Bug fixes and improvements". While that's true, it's not all there is to say.
overview
If I had to give a TL;DR of what changed, I'd summarise it as:
- Reworked the CLI. Commands are now properly categorised into subcommands. Flags also properly inherit from their parents now.
- Refactored the scanning and screenshotting logic. Most notably, the old preflight logic is removed. It was nice and fast, but when it mattered, it was a huge pain to deal with and came at the cost of result accuracy. Instead, in v3, results are now grabbed from network events like Network.responseReceived.
- Introduced the concept of "drivers," where chromedp was the original (and still default) driver. However, rod is also a driver option now that you can choose using a command-line flag.
- Significantly improved and fixed code quality, concurrency-related issues, and general screenshot reliability.
- Rewrote the report web server frontend in React. It just looks so much better, has significantly more features, and is easier to change now.
- Added an official API, complete with code-generated Swagger documentation!
- Introduced the concept of "writers," which can be used simultaneously. For example, you could write results to an SQLite database, JSON Lines, and stdout all at the same time.
- Fixed perception hashing to use Hamming Distance for grouping.
There's a lot more that's changed, so if you're curious about that and want a bit of story time, feel free to continue reading.
cgo and SQLite
SQLite has been a (required) storage mechanism since version 2. I experimented with using buntdb in version 1, but that didn't last long. It's much nicer using a format other programs could easily build on top of.
Anyway, the problem with SQLite and Golang is that the drivers often compile against the SQLite C headers. This makes cross-compilation in Go harder than it needs to be, since you need a build environment with `CGO_ENABLED=1`. To deal with this in version 2, I used the Elastic golang-crossbuild Docker images to target different operating systems and architectures for releases.
Thankfully, I have since discovered a pure Go SQLite implementation that comes at an immaterial performance cost! That means no more `CGO_ENABLED=1`, and easier/faster builds.
architecture
Every day is a school day, and I've learned a lot about Go in the years since gowitness was first released. Like any library in the Go ecosystem, if it's well-structured, anyone can technically include and use it in their own project. Unfortunately, gowitness v1 and v2 were not well-suited for this use case. I mean, you could have imported some gowitness code, but you really shouldn't have! :D
For version 3, I chose to adopt some of the project structure as described in this project-layout project. What's neat about this (apart from learning) is that now, with version 3, you should be able to import and use gowitness as a library in your own project.
Apart from the overall project structure, gowitness also underwent a significant restructuring of the codebase. There used to be one `chrome` package that held most of the important bits. Now, there are a few new concepts which include:
- Drivers: Effectively the libraries that drive Google Chrome using CDP.
- Readers: Functions that read from various sources (files, nmap, nessus, etc.)
- Writers: Functions that write driver results somewhere (SQLite, JSON Lines, stdout, etc.)
- Runner: A core component that "runs" drivers, reading from readers and writing to writers.
These all make up the internals of how an end-to-end probe of a remote website happens. All of the `scan` commands use this runner pattern, including the web interface's "New Probe" feature.
scanner Drivers
Version 1 was basically the graduation of a bash script. It literally spawned a shell to run Chrome with the `--headless` flag. What's even funnier is I started a small https-to-http proxy in a temporary goroutine to get around TLS-related errors too. Good times. It was scruffy, but in the places where I needed it (including that one project where this started), I got screenshots faster and more reliably than I'd ever been able to before!
In version 2 though, I learned about the Chrome DevTools Protocol (CDP) and discovered chromedp as a wrapper for Golang, making it possible to drive Google Chrome without handling all of the shell execution. Sure, you still need to launch Google Chrome, but the library took care of that. I also then learned about `--ignore-certificate-errors`, which meant the death of my crappy proxy. Overall, using chromedp to do the heavy lifting worked out great. There were (still are?) definitely some bugs in v2, but the most painful issue was the "preflighter" concept that I used. Basically, instead of having Chrome browse to a URL that would inevitably fail (and thereby waste time), I had a simple (and cheap) Golang `http.Client` perform a so-called preflight to determine if a URL was up. If it was, we'd continue to let Chrome browse there and take a screenshot. The problem though was that the preflighter would often fail (turns out, behaving like a browser can be hard). This meant that results would be incorrectly missed, and that wasn't great.
So, for version 3, I removed the preflighter. Now, Chrome is used for all requests, and results are recorded based on events emitted by CDP. This meant a significant increase in probing accuracy at an acceptable performance cost. Better, but not yet amazing. While spending time on this I discovered rod. Excitedly, I read the documentation and began experimenting. It was... very fast. Like, almost unbelievably fast. At this point, I decided it was time to ditch chromedp and move to rod. It's clearly the winner.
Unfortunately, that win did not last long. I used a cleaned-up version of the Tranco list (removing obvious porn, other NSFW content, etc.) to test scanning with, and found that most websites probed fine from a network perspective, but the screenshots would often fail for no obvious reason. More often than not, actually, and that began my deep dive into why. I'll give you the short version of a few late nights. It seems like if you use a single browser process with tabs, screenshots fail often. Use a fresh browser for each screenshot, and your accuracy goes up significantly. I honestly don't know why. Talking to @singe about this, he asked about per-process ulimits and whatnot, and maybe that could be it. Regardless, because of all the testing I was doing, switching between chromedp and go-rod, I ended up implementing what I'm now calling "drivers". With gowitness v3, you can use gorod as the scanning driver by setting `--driver gorod`, with chromedp still being the default.
The question though is, what's the difference then? With both drivers, when the tab strategy is used for screenshots, I'd reliably get poor screenshot accuracy. So, I changed chromedp to spawn a new browser window for each target and kept go-rod using a tabbed strategy. This means: for accuracy, use the default driver, `chromedp`; for speed (or if resource usage is an issue), use `gorod`.
readers
Instead of defining how source data should be read in the commands themselves, for version 3 I built the concept of readers, which makes them easier to maintain and, ultimately, reusable and extensible! Readers all implement the same interface, which looks like this at the time of writing:

```go
type Reader interface {
	Read(chan<- string) error
}
```
The `Read` method receives a channel that accepts strings. This means that any reader, regardless of how it sources candidate URLs (i.e., a database, a file, a CIDR parser, Nessus, etc.), should ultimately write a string candidate (as a full, well-formed URL) to the channel. It's a simple concept, but powerful when combined with a runner. When a runner is created, it prepares a `Targets` channel that you can write to. Combining them means we'll have something like this (using the file reader as an example), where `...opts` is just there to simplify the example:
```go
// get a runner and a reader
runner, _ := runner.NewRunner(...opts)
reader := readers.NewFileReader(...fileopts)

go func() {
	// Read will write to the runner.Targets channel
	if err := reader.Read(runner.Targets); err != nil {
		log.Error("error in reader.Read", "err", err)
		return
	}
}()

runner.Run()
runner.Close()
```
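Writing your own reader is then just a matter of satisfying that one-method interface. Below is a self-contained sketch; the `Reader` interface is copied locally so the snippet compiles on its own, and `sliceReader` is a made-up example type rather than a built-in gowitness reader:

```go
package main

import "fmt"

// Reader mirrors the gowitness readers.Reader interface shown earlier.
type Reader interface {
	Read(chan<- string) error
}

// sliceReader is a hypothetical reader that feeds a fixed list of URLs.
type sliceReader struct {
	urls []string
}

// Read writes each candidate (as a full, well-formed URL) to the channel.
func (r *sliceReader) Read(targets chan<- string) error {
	for _, u := range r.urls {
		targets <- u
	}
	return nil
}

func main() {
	targets := make(chan string)
	var reader Reader = &sliceReader{
		urls: []string{"https://sensepost.com", "https://example.com"},
	}

	// in gowitness, the runner owns the Targets channel; here we just drain it
	go func() {
		defer close(targets)
		if err := reader.Read(targets); err != nil {
			fmt.Println("read error:", err)
		}
	}()

	for t := range targets {
		fmt.Println("candidate:", t)
	}
}
```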
writers
Version 2 only had the ability to write to a SQLite database. Once written though, you could export the database to JSON. Continuing on the trend of building readers, I added the idea of writers. Like readers, writers implement the same interface and are slotted into a runner when started. At the time of writing, the interface looks like this:
```go
// Writer is a results writer
type Writer interface {
	Write(*models.Result) error
}
```
The `Write` method accepts a Result, and it's up to the implementation to write it somewhere. For example, insert new rows into a database, write a JSON line, or just print to stdout. When a `Runner` is started, you need to pass in a `[]writers.Writer` slice as an argument, which the runner will call when a driver returns a Result.
gowitness as a library
With all of these architecture changes, using gowitness as a library is now actually supported! All you need is:

- A driver (I may make it possible to use a `nil` driver, which means the runner will default to chromedp).
- A writer to persist results. This could be a default gowitness writer or your own. Just implement `writers.Writer`.
- Optionally, a reader. This really depends on what you're doing. There are built-in ones, but you can bring your own. Just implement `readers.Reader`.
- The runner. Glue your driver and writer together with a runner, and then feed the `runner.Targets` channel your targets. Remember to close the `Targets` channel when you're done; otherwise, `runner.Run()` will block indefinitely.
A very simple, but complete, example to probe a single URL and write the results to a JSON lines file would be:
```go
package main

import (
	"log/slog"

	"github.com/sensepost/gowitness/pkg/runner"
	driver "github.com/sensepost/gowitness/pkg/runner/drivers"
	"github.com/sensepost/gowitness/pkg/writers"
)

func main() {
	logger := slog.Default()

	// define scan/chrome/logging etc. options. drivers, scanners and writers use these.
	// it includes concurrency options, where to save screenshots and more.
	opts := runner.NewDefaultOptions()
	// set any opts you want, or start from scratch with &runner.Options{}

	// get the driver, the writer and a runner that glues it all together
	driver, _ := driver.NewChromedp(logger, *opts)
	writer, _ := writers.NewJsonWriter("results.jsonl")
	runner, _ := runner.NewRunner(logger, driver, *opts, []writers.Writer{writer})

	// with the runner up, you have runner.Targets, which is a channel you can write targets to.
	// write from a goroutine, as the runner's Run() method will wait for the `Targets` channel to close.

	// target "pusher" goroutine
	go func() {
		runner.Targets <- "https://sensepost.com"
		close(runner.Targets)
	}()

	// finally, run the runner, and when done, close it
	runner.Run() // will block until runner.Targets is closed
	runner.Close()
}
```
the web ui
Version 1 only had static report exports (but a simple API to submit new probe requests), but then in version 2 I added a reporting web server. The idea was that, because we had a SQLite database, we could do some fun interactive things with the data. The web app leveraged the standard html/template package and tabler for a neat web interface. In retrospect, I really struggled with using `html/template`. It's not necessarily the library's fault; it's just that I always struggle with inheritance and whatnot, having come from other templating engines.
The v2 web interface was hard to maintain though. It was even harder to add any interactive features given the design choice. So, having recently learned some React, and with ChatGPT and Vercel's v0 chat next to me, I managed to get a new web interface up and running that I think finally makes it look like it's from this era. It uses react-router and shadcn/ui for all the eye candy, but comes with the added benefit that there is now a real, documented API as well!
The web interface now has a lot of features too: a cleaner dashboard, the ability to search using free text or operators like `tech:` for technology-only searching, better sorting and filtering in the gallery view (i.e., by status code, etc.), as well as the ability to submit multiple URLs for probing using the "New Probe" menu.
in conclusion
There is just too much to talk about in so much detail, but I hope this gives you a small idea of what to expect for the version 3 release of gowitness!
changelog
For the real changelog, check out this pull request which is the merging of the v3 code base into the main branch.
new
- Added writers. Multiple writers can be used at once. For example, `--write-db` and `--write-jsonl` will write results as they arrive to both a database (SQLite by default) and a JSON Lines file.
- Add a CSV and a stdout writer. The stdout writer will only write successful results to stdout, useful for cases where you want to use gowitness in an existing shell pipeline. For example, you could pipe targets to gowitness and have only successful probes come out the other end with something like `cat targets.txt | gowitness scan file -f - --writer-stdout | sort`. However, you could also add `--writer-csv` if you wanted a CSV written to disk without breaking your shell pipeline.
- Add database support for MySQL.
- Make screenshot formats configurable. `jpeg` is now the default, but you can revert back to `png` with `--screenshot-format png`.
- A new reporting web interface written in React, served as a single page application.
- Add a full, documented API. Assuming gowitness is running on your localhost, you can access the API documentation at http://localhost:7171/swagger/index.html
- Make it possible to drive an existing Google Chrome instance by defining `--chrome-wss-url` and passing a Chrome DevTools Protocol WebSocket URI.
- Add the ability to save response bodies for network requests made for a target site. This needs to be enabled with the `--save-content` flag. WARNING: This flag can significantly increase the storage sizes needed by writers.
- Add a data format conversion command. You can now easily convert from and to SQLite and JSON Lines files.
- Add a database migrate command to migrate old, v2 gowitness databases to the new v3 schema. Keep in mind there are new data fields for gowitness v3, so your migrated database may not have all of the columns populated. That said, the reporting UI should work fine in these cases.
- The file scanning command can now also append ports to targets.
- Nmap scanning can now define explicit ports to ignore.
- Add the ability to use rod as the scanning driver using the `--driver gorod` flag.
- In addition to being able to add JavaScript to be evaluated on pages using `--javascript`, you can now also specify a file using `--javascript-file` for the same effect.
changed
- Refactor the internal probe pipeline to make use of "readers", "writers", and "drivers", tied together with a "runner".
- Restructure the gowitness codebase to make it simpler to use as a library.
- Instead of hard-failing when a screenshot fails, any results gathered pre-screenshot are now sent to writers.
- Chrome instances now use a temporary user data directory that is cleaned up when a browser exits.
- The command line got a splash of colour, while also being reorganised a little. The best way to see the changes is to try it out! Just add the `-h`/`--help` flag anywhere to see extensive detail! There are many flags which might just do what you're hoping for.
- Significantly simplify the release build Dockerfile and Makefile. It's no longer necessary to have `CGO_ENABLED=1`, so cross-compilation works the way it's intended again!
- Added windows/arm64 as a build target.
- Added build time to version information.
- The report list command got an overhaul, showing more information. It can also now list JSON Lines files along with the previously supported database source.
- Database merging can now take explicit databases by path, a directory of databases, or both. The merge command also got rewritten from scratch and should be more reliable now.
- The global error handler is a lot prettier now :P
- All scanning commands moved to the `scan` subcommand. That includes `single` and `file`, which are now under `scan single` and `scan file`.
- Better resource usage when using the chromedp scan driver. It's still high because we're spawning browser windows (see the drivers section for why), but much better.
- Add ability to store page cookies.
- The static report now exports a slightly more interactive report that shows both a grid and table view, in addition to some sorting features.
- Perception hashing now uses Hamming distance to group similar images. This means sorting by similarity not only actually works, but is also included in the static data export.
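For the curious: grouping by perceptual hash similarity boils down to counting the bits that differ between two 64-bit hashes (the Hamming distance). A minimal sketch, where the grouping threshold is illustrative rather than gowitness's actual cutoff:

```go
package main

import (
	"fmt"
	"math/bits"
)

// hammingDistance counts the bits that differ between two 64-bit perceptual
// hashes. Lower distances mean more visually similar screenshots.
func hammingDistance(a, b uint64) int {
	return bits.OnesCount64(a ^ b)
}

func main() {
	h1 := uint64(0xe0f0e0c0c0808000)
	h2 := uint64(0xe0f0e0c0c0808001) // differs from h1 in a single bit

	fmt.Println(hammingDistance(h1, h2)) // 1

	const threshold = 10 // illustrative: "similar enough to group together"
	fmt.Println(hammingDistance(h1, h2) <= threshold) // true
}
```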
fixed
- Ensure user-specified window sizes are used for screenshots.
- The `--screenshot-fullpage` flag now correctly applies to `scan` subcommands.
- Long filenames (created as a result of long URLs) are now truncated.
GitHub-generated release notes:
What's Changed
- Fix Chinese display error by @yumusb in #206
- Fix typo ;) by @alileza in #217
- Fix typos in UI templates by @seqre in #219
- fix ipv6 address error by @irabva in #226
- add skip port option for nmap by @nikaiw in #227
- version 3 - a major rewrite 🙃 by @leonjza in #228
New Contributors
- @yumusb made their first contribution in #206
- @alileza made their first contribution in #217
- @seqre made their first contribution in #219
- @irabva made their first contribution in #226
- @nikaiw made their first contribution in #227
Full Changelog: 2.5.1...3.0.0
a534c782049a5e0b6d6786953bc77bcfd931b367 gowitness-3.0.0-darwin-amd64
9e4ff775c114b0e2e8e32e4823608178f5d81f6d gowitness-3.0.0-darwin-arm64
3a1564544d1c119ca32bdbe2cb46657c048d1e4a gowitness-3.0.0-linux-amd64
1a9983c350893a0aea1c9e840eef5718a3e0660a gowitness-3.0.0-linux-arm
ba11d3b5eeff7ed849338b3265b9e4ab409805af gowitness-3.0.0-linux-arm64
39fcbc368f2712391fa151f7222edd575df9b1ab gowitness-3.0.0-windows-amd64.exe
55ca444c5639ea4f8ac968d82506ce17d0c62af4 gowitness-3.0.0-windows-arm64.exe