Commit 03a0ecd387 (parent 2a422aca73): uploading files

mirror of Dread's EndGameV3, a load balancer to scale out your .onion service operations
endgamefiles/README.md (new file, 107 lines)
@@ -0,0 +1,107 @@
# ENDGAME V3

This is the third, and most likely final, version of EndGame, the most popular anti-DDoS solution on the darknet.

EndGame is

- a front system designed to protect the core application servers of an onion service in a safe and private way.
- locally compiled and locally run (no trusted or middle party).
- a combination of multiple different technologies working together in harmony (listed below).
- FREE FOR ALL TO USE!
- *arguably* magic ㄟ( ▔, ▔ )ㄏ
## Main Features

- Fully scripted and easily deployable (for mass scaling!) on blank Debian 11 systems.
- Full-featured NGINX Lua script that filters packets and serves a captcha directly from the NGINX layer.
- Rate limiting via Tor's v3 onion service circuit ID system, with secondary rate limiting based on a testcookie-like system.
- Easy configuration for both local and remote (over Tor) front systems.
- Easily configurable and changeable to meet an onion service's needs.
- I2P support out of the box (using i2pd).
- NEW hardening and compromise-check processes (fail2ban, rkhunter, debsecan).
- NEW captcha processes, including a captcha built in Rust with zero runtime dependencies!
- Various updates and security improvements in the Lua script (it now sends you back to the queue if you fail 3 captchas).
- Caching of NGINX modules for faster deployment!
- LOTS of kernel tweaks, both for hardening and for more efficient memory allocation for Tor (not so much I2P).
- Fresh HTML and CSS that makes it very clear you are using the latest EndGame!
- Includes an onionbalance process completely rewritten in Go and built for high-traffic sites (see it in the sourcecode folder).
It can also:
- Cause you to grow a bigger dick than the asshole DDOSER (true *figuratively*, lies *probably*).
- Save you millions of dollars that DDOSers would otherwise extract by downing your site for ransom or extortion fees.
- Make it look like you know what the fuck you are doing.
## How it works

EndGame is a FRONT system. That is to say, it filters the requests a service receives, blocks bad requests, and only passes good ones to the application server.

At a request level it works like this:

`USER -> Tor/i2p -> EndGame Front -> Tor (optional) -> Backend (origin) Application Server`

*EndGame should be on a separate server from your backend server.* It only proxies content from your backend to the user. You will still need to configure your backend to handle requests from the EndGame front.
This is the same system that anti-DDoS services like Cloudflare, Indusface, and Imperva use to protect websites from attacks. The difference is that this is self-hosted, fully controlled by you for your own needs, and made for darknet networks.

**On Tor, GoBalance (onionbalance) is central to really scaling up protection, and it should be used with EndGame in production environments.**

GoBalance takes the various EndGame front addresses and combines their descriptors to create a distributed, DNS-round-robin-like system on Tor. This allows for load balancing and prevents a single front from being overloaded. With GoBalance you can scale to hundreds of EndGame fronts that users can access from a single master onion (called MASTERONION in the configuration). The master onion is the address that GoBalance uses to sign and publish to the Tor network.

If you want to learn more about how GoBalance works, read the [onionbalance documentation](https://onionbalance.readthedocs.io/en/latest/index.html). GoBalance is an improved fork of it written in Go. To learn more about what makes GoBalance different, go into the sourcecode directory and open the GoBalance folder.

You can use EndGame without GoBalance (or onionbalance), but the protection would be limited to the single EndGame front.
## Setup Process

If you want to use GoBalance, so you can load balance incoming requests and get real scalable protection, follow all the steps below. If you don't, skip from step 1 straight to step 3.

1. [Download the Latest EndGame Source from Dread](http://dreadytofatroptsdj6io7l3xptbet6onoyno2yv7jicoxknyazubrad.onion/d/endgame). Verify the archive signature and that it matches what is signed by /u/Paris. Extract the archive to your local machine. DO NOT BLINDLY USE ENDGAME FROM A RANDOM GITHUB REPO YOU FOUND. DON'T BE STUPID.
2. Go to sourcecode/gobalance and build gobalance with [go](https://go.dev). Read its README.md for how to compile it and how to generate the gobalance configuration (see the build sketch after this list). With that configuration you will be able to see your MASTERONION url. The part of the key filename before .key is your master onion address. Use that as your MASTERONION in the endgame.config, ending it with '.onion'.
3. Open up and edit the endgame.config. You will need to change your TORAUTHPASSWORD. Change it to a random alphanumeric password of your choice. It is only used for authentication on NGINX's layer to send circuit kill commands.
4. You have two options for how EndGame sends traffic to your backend. You can have it direct traffic to an onion address, or you can have it locally proxy to a server on the same network.
    1. Tor Proxy: Set both BACKENDONION variables to the main onion service you want protected. This means your origin application server needs to run Tor with its own onion service address; put that onion address in BACKENDONION1/BACKENDONION2. If you have multiple backends (highly recommended) you can put different backend addresses here to get load balancing and failover. It's easy to add even more by customizing EndGame for your needs.
    2. Local Proxy: Change LOCALPROXY to true and edit PROXYPASSURL to the specific IP or hostname of your backend location. It will default to connecting on port 80 via http, but you can edit line 320 of the site.conf to change that to your specific needs.
5. Enable I2PSETUP and/or TORSETUP by setting them to true. You can also enable TORINTRODEFENSE and TORPOWDEFENSE to provide more protection against introduction attacks on the Tor network.
6. Edit KEY and SALT to secure cookie values (a generation sketch follows this list). PROTECT THESE VALUES. If they get leaked, an attacker could generate EndGame cookies and hurt your EndGame protection.
    1. KEY: your encryption key, used for encryption. It should be between 68 and 128 random alphanumeric characters.
    2. SALT: the salt for the encryption key. It must be exactly 8 alphanumeric characters.
7. Branding is important. EndGame makes it easy to use your own branding. By default it will use Dread's branding, but you should change it.
    1. HEXCOLOR and HEXCOLORDARK are the specific colors used on the pages. Set HEXCOLOR to your main site color and HEXCOLORDARK to a slightly darker version of it.
    2. SITENAME, SITETAGLINE, SITESINCE are all information about your site. Self-explanatory.
    3. FAVICON is used as your site's favicon, in base64 (see the encoding sketch after this list). This limits the number of requests a browser makes when first loading the queue page. Make sure this value is set to something; otherwise people's connections will get cut off from the queue when their browser requests favicon.ico.
    4. SQUARELOGO is used as the icon for the queue running man and the main splash logo on the captcha page. In base64 format.
    5. NETWORKLOGO is used as the bottom network icon on the captcha page, which lets different sites that are part of the same organization be shown. In base64 format.
8. After you are done with EndGame's configuration, archive everything except the sourcecode folder (see the packaging sketch after this list). Transfer the archive to a blank Debian 12 system. As root, extract the archive and run setup.sh like './setup.sh'. At the end of the setup it will print an onion address (and an i2p address if set, but don't add that to gobalance) which you can provide to users or add to your gobalance configuration.
9. Go out into the world knowing your service is protected by the best and most tested anti-DDoS solution for the darknet.
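
A minimal sketch of steps 2, 3, and 6, assuming standard Go tooling and GNU coreutils; the exact gobalance build invocation and flags live in its own README, so treat this as illustrative rather than canonical:

```bash
#!/bin/bash
# Step 2 (illustrative): compile gobalance from the bundled source.
cd sourcecode/gobalance
go build -o gobalance .   # produces a ./gobalance binary
cd -

# Steps 3 and 6 (illustrative): generate random secrets of the required lengths.
TORAUTHPASSWORD=$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32)
KEY=$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 96)    # within the 68-128 char range
SALT=$(tr -dc 'A-Za-z0-9' </dev/urandom | head -c 8)    # exactly 8 chars
echo "TORAUTHPASSWORD=\"$TORAUTHPASSWORD\""
echo "KEY=\"$KEY\""
echo "SALT=\"$SALT\""
```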
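
A similar sketch for steps 7.3 and 8, again assuming GNU coreutils and tar; the favicon path and archive name are illustrative:

```bash
#!/bin/bash
# Step 7.3 (illustrative): base64-encode an icon for the FAVICON config value.
printf 'FAVICON="data:image/x-icon;base64,%s"\n' "$(base64 -w0 favicon.ico)"

# Step 8 (illustrative): package everything except sourcecode/ for deployment.
tar --exclude='./sourcecode' -czf ../endgame.tar.gz .
# Copy ../endgame.tar.gz to the blank Debian system, then as root:
#   tar -xzf endgame.tar.gz && ./setup.sh
```
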
### Tech Overview

EndGame uses a number of open-source projects (and libraries) to work properly.

Projects:
* [NGINX](https://NGINX.org/) - NGINX! A web server, *obviously*, providing the packet handling, threading, and proxying.
* [Tor](https://www.torproject.org/) - Tor is free and open-source software for enabling anonymous communication. It's awesome and makes all this possible.
* [STEM](https://stem.torproject.org/) - A python controller for Tor.
* [NYX](https://nyx.torproject.org/) - A command-line monitor for Tor (to easily check the EndGame front's Tor process).
* [GoBalance](http://yylovpz7taca7jfrub3wltxabzzjp34fngj5lpwl6eo47ekt5cxs6mid.onion/n0tr1v/gobalance) - A distributed DNS round-robin like system on Tor to allow load-balancing and eliminate single points of failure.
* [OpenSSL](https://www.openssl.org/) - A dependency for many of these projects and libraries.
* [Socat](http://www.dest-unreach.org/socat/) - A command-line utility that establishes two bidirectional byte streams and transfers data between them (used for backend Tor proxying).

Hardening Projects:
* [Fail2ban](https://www.fail2ban.org/) - A set of server and client programs to limit brute force authentication attempts (automatically configured).
* [Rkhunter](http://rkhunter.sourceforge.net/) - A shell script which carries out various checks on the local system to try and detect known rootkits and malware.
* [Chkrootkit](https://www.chkrootkit.org/) - A tool to locally check for signs of a rootkit.

NGINX Modules:
* [NAXSI](https://github.com/nbs-system/naxsi) - A high performance web application firewall for NGINX.
* [Headers More](https://github.com/openresty/headers-more-NGINX-module) - A module for better control of headers in NGINX.
* [Echo NGINX](https://github.com/openresty/echo-nginx-module) - An NGINX module which allows shell-style commands in the NGINX configuration file.
* [LUA NGINX](https://github.com/openresty/lua-nginx-module) - The power of Lua in NGINX via a module. This enables all the scripting, packet filtering, and captcha functionality EndGame does.
* [NGINX Development Kit](https://github.com/vision5/ngx_devel_kit) - Development kit for NGINX (dependency).

Libraries:
* [LUAJIT2 NGINX](https://github.com/openresty/luajit2) - Just-in-time compiler for Lua.
* [LUA Resty String](https://github.com/openresty/lua-resty-string) - String functions for ngx_lua and LuaJIT2.
* [LUA Resty Cookie](https://github.com/cloudflare/lua-resty-cookie) - Provides cookie manipulation.
* [LUA Resty Session](https://github.com/bungle/lua-resty-session) - Provides session manipulation.
* [LUA Resty AES](https://github.com/c64bob/lua-resty-aes/raw/master/lib/resty/aes_functions.lua) - AES functions file for Lua. Used for shared session cookies.
endgamefiles/aptpreferences (new file, 11 lines)
@@ -0,0 +1,11 @@
Package: *
Pin: release a=bullseye
Pin-Priority: 500

Package: linux-image-amd64
Pin: release a=unstable
Pin-Priority: 1000

Package: *
Pin: release a=unstable
Pin-Priority: 100
endgamefiles/dependencies/echo-nginx-module (submodule)
@@ -0,0 +1 @@
Subproject commit 7fddb5b082c6382dd15b6d8ddbf2ccb2a490aafd
endgamefiles/dependencies/headers-more-nginx-module (submodule)
@@ -0,0 +1 @@
Subproject commit 607d1b1f32abc3de5a26deeeb827de19c1e842b9
endgamefiles/dependencies/lua-nginx-module (submodule)
@@ -0,0 +1 @@
Subproject commit c89469e920713d17d703a5f3736c9335edac22bf

endgamefiles/dependencies/lua-resty-cookie (submodule)
@@ -0,0 +1 @@
Subproject commit f418d77082eaef48331302e84330488fdc810ef4

endgamefiles/dependencies/lua-resty-session (submodule)
@@ -0,0 +1 @@
Subproject commit 5f2aed616d16fa7ca04dc40e23d6941740cd634d

endgamefiles/dependencies/lua-resty-string (submodule)
@@ -0,0 +1 @@
Subproject commit e6b80ac31dd9ff26bf444e50f5d7bda1089f972b

endgamefiles/dependencies/luajit2 (submodule)
@@ -0,0 +1 @@
Subproject commit e598aeb7426dbc069f90ba70db9bce43cd573b0e

endgamefiles/dependencies/naxsi (submodule)
@@ -0,0 +1 @@
Subproject commit d714f1636ea49a9a9f4f06dba14aee003e970834

endgamefiles/dependencies/ngx_devel_kit (submodule)
@@ -0,0 +1 @@
Subproject commit b4642d6ca01011bd8cd30b253f5c3872b384fd21
endgamefiles/endgame.config (new file, 43 lines)
@@ -0,0 +1,43 @@
#This area

#OPTIONS!
MASTERONION="dreadytofatroptsdj6io7l3xptbet6onoyno2yv7jicoxknyazubrad.onion"
TORAUTHPASSWORD="authpassword"
BACKENDONION1="biblemeowimkh3utujmhm6oh2oeb3ubjw2lpgeq3lahrfr2l6ev6zgyd.onion"
BACKENDONION2="biblemeowimkh3utujmhm6oh2oeb3ubjw2lpgeq3lahrfr2l6ev6zgyd.onion"

#set to true if you want to set up a local proxy instead of proxying over Tor
LOCALPROXY=false
PROXYPASSURL="10.10.10.0"

#reboot after completion. Highly recommended to get the new kernel active.
REBOOT=true

#set to true if you want i2pd installed and set up
I2PSETUP=false

#set to true if you want tor installed and set up
TORSETUP=true

#enable Tor introduction defense. Keeps the Tor process from stalling but hurts reliability. Only recommended if running on low-powered fronts.
TORINTRODEFENSE=false

#enable Tor POW introduction defense. This should be enabled!
TORPOWDEFENSE=true

#Shared Front Captcha Key. Key should be alphanumeric, between 64-128 chars. Salt needs to be exactly 8 chars.
KEY="encryption_key"
SALT="1saltkey"
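#illustrative one-liners for generating these values (any CSPRNG works):
#  KEY:  tr -dc 'A-Za-z0-9' </dev/urandom | head -c 96
#  SALT: tr -dc 'A-Za-z0-9' </dev/urandom | head -c 8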
#session length is in seconds. Default is 12 hours.
SESSION_LENGTH=43200

#CSS Branding

HEXCOLOR="9b59b6"
HEXCOLORDARK="713C86"
SITENAME="dread"
SITETAGLINE="the frontpage of the dark net"
SITESINCE="2018"
FAVICON="data:image/x-icon;base64,AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAQAABMLAAATCwAAAAAAAAAAAACtRI7/rUSO/61Ejv+tRI7/rUSO/61Fjv+qPor/pzaG/6k7if+sQo3/qDiH/6g4h/+sQ43/rUSO/61Ejv+tRI7/rUSO/61Ejv+tRI7/rUSO/61Fjv+sQo3/uV6e/8iBs/+9aaT/sEyT/8V7r//Feq//sEqS/6xDjf+tRI7/rUSO/61Ejv+tRI7/rUSO/65Fj/+vR5D/rEGM/+fI3v///////fv8/+/a6f/+/f7/+vT4/7Zam/+rP4v/rkWP/61Ejv+tRI7/rUSO/61Fjv+sQYz/qTqI/6g4h//hudX/5sXc/+7Z6P////////7///ft9P+2WZr/q0CL/61Fj/+tRI7/rUSO/61Fj/+rQIv/uFyd/82Ou//Njrv/uWGf/6g6iP+uR5D/5sbc///////47vX/tlma/6s/i/+tRY//rUSO/61Ejv+uRo//qDqI/9aix///////69Hj/61Ejv+vSJD/qTqI/8BvqP//////+O/1/7ZZmv+rP4v/rUWP/61Ejv+tRI7/rkaP/6k8if/fttP//////9ekyP+oOIf/sEuS/6tAi/+7ZKH//vv9//nw9v+2WJr/qz+L/61Fj/+tRI7/rUSO/65Gj/+oOoj/1qHG///////pzeH/qj6K/6o8if+lMoP/0pjB///////47vX/tlma/6s/i/+tRY//rUSO/61Ejv+uRo//qj2K/7xmo//8+Pv//////+G61f+8ZqP/zpC8//v2+v//////+O/1/7ZZmv+rP4v/rUWP/61Ejv+tRI7/rUSO/65Gj/+pPIn/zo+7//79/v///////////////////v////////jw9v+2WZr/qz+L/61Fj/+tRI7/rUSO/61Ejv+tRI7/rUWP/6o9iv/Ab6j/37bT/+vR4//kwdr/16XI//36/P/58ff/tlma/6s/i/+tRY//rUSO/61Ejv+tRI7/rUSO/61Ejv+uRo//qj2K/6o9if+tRY7/qDmH/7VYmv/9+fv/+fH3/7ZYmv+rP4v/rUWP/61Ejv+tRI7/rUSO/61Ejv+tRI7/rUSO/65Gj/+uRo//rkaP/6s/i/+6Y6H//Pf6//ju9f+1WJr/q0CL/61Fj/+tRI7/rUSO/61Ejv+tRI7/rUSO/61Ejv+tRI7/rUSO/65Gj/+qPor/umOh//79/v/69Pj/tlqb/6s/i/+uRY//rUSO/61Ejv+tRI7/rUSO/61Ejv+tRI7/rUSO/61Ejv+tRI7/rEKN/7FNk//GfLD/xHmu/7BKkv+sQ43/rUSO/61Ejv+tRI7/rUSO/61Ejv+tRI7/rUSO/61Ejv+tRI7/rUSO/61Ejv+sQo3/qDiH/6g4h/+sQ43/rUSO/61Ejv+tRI7/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="
SQUARELOGO="data:image/webp;base64,UklGRkwCAABXRUJQVlA4WAoAAAAQAAAATwAATwAAQUxQSC8AAAABH6CobRuIP9xev3ajERHxDWAqkg1PshdNnqeZJBfAR0T/J2ADngp0E/PbifAGPABWUDgg9gEAAFAMAJ0BKlAAUAA+bS6URiQioiEuGikggA2JQBoZQmgR/d8Cdib6gNsB5gPNy04Dej0a2NXoYw8XMJObQmvWHz52jkOgFebAO+W2C3pjVzORG7gs9ssx8qnjo3F96nSITr1LFsVBFyXhL7ywAP7cUf/iHNu+tvSh+rohhPqZvzMSRXtv7e8U/Dh5LwZIJHvcIx9GmDQsAejcJZBn+c5L63sC8fmQQAde+N776pYSR99TxW58l33OS2vwEbv0MLQQeKEkfOYxulikHr7tl/6ZwLnjENpEv6OnhDWVW53x32zxICdEDZ9TnGenzgbr1pGHJwZ3LX7o1h0RQkYVBag7IsYd+buU+m5wgCohMPbEwS+Vi02J7tVFAvUPVW5VfdjGFbzTfD5g/+nMoacT15PcUWxFNYCacgap7Zh5g0OaAa1rGJBuVbsFrGbbp5C85U3y+OuVTJcUt2wPNQJPA4lpjOyM5wGGRUxrjwy/+cOaOkfLlYc2ImOJLSmchU0u727olq8AMLRMZc/queUbr0E5ec/vKzZb9Z00D8dh5Hk6XNob6WDIFIusbW9UiKlnDFOz6ewFp4sONEBGGtI3gWG86cC+hmvHDzYlWupLMc2pRS9aO6FRiFAPmQXiJyK+lCZPtevTMxcUM4a6g3otDsiMGAOMbvIgAAA="
NETWORKLOGO="data:image/webp;base64,UklGRpgFAABXRUJQVlA4WAoAAAAQAAAANgAAOwAAQUxQSCMFAAABoLBt2zE3HtddZhone8SubSt2bdu27a7rNqjtJlnvJqljJ7Xbqd1o0Ps4dt7v+6aTiIDgRpIiybnMu3cFb4DxaXs0+0hbmD4tbpO81cLkXLNJklmuptaLYnqa2kKJRaZVddJ9iQeTqppQRAKZK8glEyJMpdZeDdM6zhTM7phC7b66plB5/S1qFzqbFb5KYcqrQjOnOaW880P1bzYmi4xpDERz2VROWcoYoOFuMmfyt2n7z01m+KmAEP1d1TAON7urDwXMulyjLrmr8Vy2FjNjpB0AVTIXYCEXYj6v2QCwHaxmaZSnIisLAHaT1fz8swMAYBLzLLCcy2GRol0GALDf9ImvZtgDsLQSOUyJO9XHKaCAxfHNIMT9EfsBy7gMaDw5GGIaxxXzRqBTnzMJ0xwBuCaT1KVqXv8RDoh2MwrAAs4DKqlUlrbWlQEAYb+/0qTqSF5yQ41fKeRdf1SxEVI1sPRjHQvLauv4PTDq/Mlj5481h6WDjU0V9HlHIdvN/Z6KHsecSExKTExMunj4Lj+f/+vvcw+oDkY0STIM268lJiWdjH4kUgf0/kwxn3QUoiNZQjFzsfYryddtkCd8pE8Ufenrniyxwa1FSGhoWEDoQy4NCA4N9jvAs07YaPDGHx36h4a0cNsgcdkTAeI9VG9tAAAYW3IFQmZyGiTEF2ywVc23giAAkWkk74/N5ftYb6BJCTuI1nO9lOHjece+Y/bY+yRTIwEAAfr8ON0UxwU63p2NGEZD1HpRa2CD5MRs1V1yieMUfUK+PgCC+l8TPDOL66JuDHUF5QW1IZvVeoNOOETuq4d6xdk+CV/rS5oj0zGDF1RQtbtEFnkA3/XaGBMbtSsqemsmDdriwa0OKqjiOAvpbCFVaFOjgGMArODnXVXNRt4o+0pJJJ7FABjLAnPrAjkFLhjFO45o9FrbCxa7SVJBB2RqGsPpLsfAOV9BlctchyiuRvVYGsEPUYzCel6ppgxdy9UztU+8MY+C4luFahnBaPBaM1ut6QojYBvJGaj9WnCif22PTqufSgQBc0jugFFsiphcAxtpsBRC2tyRUSmRN62N9g/MrxschCQdiuXZwNgrjys3fEXySyNIc0xqLsntgFEPefHjf/AlyWvmMiZqZB4y65mmK4x6oQ0WIxBicNZSRv8vci+0zqgXGsW7jlAh3CBBzrBiuX8zxzsc/T/Ff+qxAOBfRvKOs4xVOvGf+mkMgDGSf2r5EopXwcbf1vsBSU6TUqVQdF8ooQuciXS2kC/Y+qhzhEOwPuPs6SvbbCWWayhXsPVRtzjLK15asIH6/Dj9NMf5WhaGQojNTz2rAnBfpiUV20N8nqQ9dE8n+XB8Dt/H+kBkls+dU8YuEopcvhnljnsgbUYh8q1PKl6u8pRan9jXA+B1Rb7RhgxuJfhdwWs/xUab7N73i1JbjwMAVbqCD10U2/rnXoFq0aNoYYhcX25QbXc55eOrOESe+FpsF73tK4wsRwsAgEXbrn6+XX39/f39fH0jQ2wUR9YvKrhdIqlL0bz+IwyS1J7YBEpRGpBJLgAcpyWc7uMYWMDiuKaiRq9e9gVG5Y8CeuSeMVcaxyfjJjsYO/yrzmGeBdZwDSxyuLqydPi/nGYPwMIKSnHbUcwXwqphJq4aizCfl8wlq0bJLncYnfb/aoxebLRJnU25Ri1hNNBwF5kzsaKWtgpaEStoITXp+vugQtZfAIsqZtkGelbUau+aJch0hanT4ibJG81h+rQ5kn2kDYwGAFZQOCBOAAAA0AQAnQEqNwA8AD5tLJJFpCKhmAQAQAbEtIAASD/dKxnoTlaUzRT/Q9kkRNlNuAAA/TH//9wDj/sFX/91UzfhNf+E8DE//4h3B/k1wAAA"
endgamefiles/getdependencies.sh (new file, 45 lines)
@@ -0,0 +1,45 @@
#!/bin/bash
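# Fetches the NGINX module and Lua library sources listed below into
# ./dependencies, offering to wipe and re-sync an existing checkout first.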

shopt -s nullglob dotglob
directory=(dependencies/*)
if [ ${#directory[@]} -gt 0 ]; then
    read -p "Found Dependency Directory. Do you want to wipe it? (y/n) " -n 1 -r
    echo    # drop to a fresh line after the single-key prompt
    if [[ $REPLY =~ ^[Yy]$ ]]
    then
        rm -R dependencies
        echo "Starting Resync"
    else
        echo "Cancelled Sync"
        exit 0
    fi
fi

apt-get update
apt-get -y install git

mkdir dependencies
cd dependencies

git clone https://github.com/nbs-system/naxsi.git
git clone https://github.com/openresty/headers-more-nginx-module.git
git clone https://github.com/openresty/echo-nginx-module.git

#some required stuff for lua/luajit. Versions should be checked and updated with every install/update or nginx won't boot!
git clone https://github.com/openresty/lua-nginx-module

git clone https://github.com/openresty/luajit2
cd luajit2
git checkout v2.1-agentzh
cd ..

git clone https://github.com/vision5/ngx_devel_kit

git clone https://github.com/openresty/lua-resty-string

git clone https://github.com/cloudflare/lua-resty-cookie

git clone https://github.com/bungle/lua-resty-session

clear
echo "Dependencies have been got!"
exit 0
endgamefiles/i2pd.conf (new file, 284 lines)
@@ -0,0 +1,284 @@
## Configuration file for a typical i2pd user
## See https://i2pd.readthedocs.io/en/latest/user-guide/configuration/
## for more options you can use in this file.

## Lines that begin with "## " try to explain what's going on. Lines
## that begin with just "#" are disabled commands: you can enable them
## by removing the "#" symbol.

## Tunnels config file
## Default: ~/.i2pd/tunnels.conf or /var/lib/i2pd/tunnels.conf
# tunconf = /var/lib/i2pd/tunnels.conf

## Tunnels config files path
## Use that path to store separated tunnels in different config files.
## Default: ~/.i2pd/tunnels.d or /var/lib/i2pd/tunnels.d
# tunnelsdir = /var/lib/i2pd/tunnels.d

## Path to certificates used for verifying .su3, families
## Default: ~/.i2pd/certificates or /var/lib/i2pd/certificates
# certsdir = /var/lib/i2pd/certificates

## Where to write pidfile (default: i2pd.pid, not used in Windows)
# pidfile = /run/i2pd.pid

## Logging configuration section
## By default logs go to stdout with level 'info' and higher
## For Windows OS by default logs go to file with level 'warn' and higher
##
## Logs destination (valid values: stdout, file, syslog)
## * stdout - print log entries to stdout
## * file - log entries to a file
## * syslog - use syslog, see man 3 syslog
# log = file
## Path to logfile (default - autodetect)
# logfile = /var/log/i2pd/i2pd.log
## Log messages above this level (debug, info, *warn, error, none)
## If you set it to none, logging will be disabled
# loglevel = warn
## Write full CLF-formatted date and time to log (default: write only time)
# logclftime = true

## Daemon mode. Router will go to background after start. Ignored on Windows
# daemon = true

## Specify a family, router belongs to (default - none)
# family =

## Network interface to bind to
## Updates address4/6 options if they are not set
# ifname =
## You can specify different interfaces for IPv4 and IPv6
# ifname4 =
# ifname6 =

## Local address to bind transport sockets to
## Overrides host option if:
## For ipv4: if ipv4 = true and nat = false
## For ipv6: if 'host' is not set or ipv4 = true
# address4 =
# address6 =

## External IPv4 or IPv6 address to listen for connections
## By default i2pd sets IP automatically
## Sets published NTCP2v4/SSUv4 address to 'host' value if nat = true
## Sets published NTCP2v6/SSUv6 address to 'host' value if ipv4 = false
# host = 1.2.3.4

## Port to listen for connections
## By default i2pd picks random port. You MUST pick a random number too,
## don't just uncomment this
# port = 4567

## Enable communication through ipv4
ipv4 = true
## Enable communication through ipv6
ipv6 = false

## Enable SSU transport
ssu = false

## Bandwidth configuration
## L limit bandwidth to 32KBs/sec, O - to 256KBs/sec, P - to 2048KBs/sec,
## X - unlimited
## Default is L (regular node) and X if floodfill mode enabled. If you want to
## share more bandwidth without floodfill mode, uncomment that line and adjust
## value to your possibilities
# bandwidth = L
## Max % of bandwidth limit for transit. 0-100. 100 by default
# share = 100

## Router will not accept transit tunnels, disabling transit traffic completely
## (default = false)
# notransit = true

## Router will be floodfill
## Note: that mode uses much more network connections and CPU!
# floodfill = true

[ntcp2]
## Enable NTCP2 transport (default = true)
# enabled = true
## Publish address in RouterInfo (default = true)
# published = true
## Port for incoming connections (default is global port option value)
# port = 4567

[ssu2]
## Enable SSU2 transport
enabled = true
## Publish address in RouterInfo
published = true
## Port for incoming connections (default is global port option value or port + 1 if SSU is enabled)
# port = 4567

[http]
## Web Console settings
## Uncomment and set to 'false' to disable Web Console
enabled = true
## Address and port service will listen on
address = 127.0.0.1
port = 7070
## Path to web console, default "/"
# webroot = /
## Uncomment following lines to enable Web Console authentication
## You should not use Web Console via public networks without additional encryption.
## HTTP authentication is not encryption layer!
# auth = true
# user = i2pd
# pass = changeme
## Select webconsole language
## Currently supported english (default), afrikaans, armenian, chinese, czech, french,
## german, italian, polish, portuguese, russian, spanish, turkish, turkmen, ukrainian
## and uzbek languages
# lang = english

[httpproxy]
## Uncomment and set to 'false' to disable HTTP Proxy
enabled = false
## Address and port service will listen on
#address = 127.0.0.1
#port = 4444
## Optional keys file for proxy local destination
# keys = http-proxy-keys.dat
## Enable address helper for adding .i2p domains with "jump URLs" (default: true)
## You should disable this feature if your i2pd HTTP Proxy is public,
## because anyone could spoof the short domain via addresshelper and forward other users to phishing links
# addresshelper = true
## Address of a proxy server inside I2P, which is used to visit regular Internet
# outproxy = http://false.i2p
## httpproxy section also accepts I2CP parameters, like "inbound.length" etc.

[socksproxy]
## Uncomment and set to 'false' to disable SOCKS Proxy
enabled = false
## Address and port service will listen on
#address = 127.0.0.1
#port = 4447
## Optional keys file for proxy local destination
# keys = socks-proxy-keys.dat
## Socks outproxy. Example below is set to use Tor for all connections except i2p
## Uncomment and set to 'true' to enable using of SOCKS outproxy
# outproxy.enabled = false
## Address and port of outproxy
# outproxy = 127.0.0.1
# outproxyport = 9050
## socksproxy section also accepts I2CP parameters, like "inbound.length" etc.

[sam]
## Comment or set to 'false' to disable SAM Bridge
enabled = false
## Address and ports service will listen on
# address = 127.0.0.1
# port = 7656
# portudp = 7655

[bob]
## Uncomment and set to 'true' to enable BOB command channel
# enabled = false
## Address and port service will listen on
# address = 127.0.0.1
# port = 2827

[i2cp]
## Uncomment and set to 'true' to enable I2CP protocol
# enabled = false
## Address and port service will listen on
# address = 127.0.0.1
# port = 7654

[i2pcontrol]
## Uncomment and set to 'true' to enable I2PControl protocol
# enabled = false
## Address and port service will listen on
# address = 127.0.0.1
# port = 7650
## Authentication password. "itoopie" by default
# password = itoopie

[precomputation]
## Enable or disable elgamal precomputation table
## By default, enabled on i386 hosts
# elgamal = true

[upnp]
## Enable or disable UPnP: automatic port forwarding (enabled by default in WINDOWS, ANDROID)
# enabled = false
## Name i2pd appears in UPnP forwardings list (default = I2Pd)
# name = I2Pd

[meshnets]
## Enable connectivity over the Yggdrasil network
# yggdrasil = false
## You can bind address from your Yggdrasil subnet 300::/64
## The address must first be added to the network interface
# yggaddress =

[reseed]
## Options for bootstrapping into I2P network, aka reseeding
## Enable or disable reseed data verification.
verify = true
## URLs to request reseed data from, separated by comma
## Default: "mainline" I2P Network reseeds
# urls = https://reseed.i2p-projekt.de/,https://i2p.mooo.com/netDb/,https://netdb.i2p2.no/
## Reseed URLs through the Yggdrasil, separated by comma
# yggurls = http://[324:9de3:fea4:f6ac::ace]:7070/
## Path to local reseed data file (.su3) for manual reseeding
# file = /path/to/i2pseeds.su3
## or HTTPS URL to reseed from
# file = https://legit-website.com/i2pseeds.su3
## Path to local ZIP file or HTTPS URL to reseed from
# zipfile = /path/to/netDb.zip
## If you run i2pd behind a proxy server, set proxy server for reseeding here
## Should be http://address:port or socks://address:port
# proxy = http://127.0.0.1:8118
## Minimum number of known routers, below which i2pd triggers reseeding. 25 by default
# threshold = 25

[addressbook]
## AddressBook subscription URL for initial setup
## Default: reg.i2p at "mainline" I2P Network
# defaulturl = http://shx5vqsw7usdaunyzr2qmes2fq37oumybpudrd4jjj4e4vk4uusa.b32.i2p/hosts.txt
## Optional subscriptions URLs, separated by comma
# subscriptions = http://reg.i2p/hosts.txt,http://identiguy.i2p/hosts.txt,http://stats.i2p/cgi-bin/newhosts.txt,http://rus.i2p/hosts.txt

[limits]
## Maximum active transit sessions (default: 5000)
## This value is doubled if floodfill mode is enabled!
# transittunnels = 5000
## Limit number of open file descriptors (0 - use system limit)
# openfiles = 0
## Maximum size of corefile in Kb (0 - use system limit)
# coresize = 0

[trust]
## Enable explicit trust options. false by default
# enabled = true
## Make direct I2P connections only to routers in specified Family.
# family = MyFamily
## Make direct I2P connections only to routers specified here. Comma separated list of base64 identities.
# routers =
## Should we hide our router from other routers? false by default
# hidden = true

[exploratory]
## Exploratory tunnels settings with default values
# inbound.length = 2
# inbound.quantity = 3
# outbound.length = 2
# outbound.quantity = 3

[persist]
## Save peer profiles on disk (default: true)
# profiles = true
## Save full addresses on disk (default: true)
# addressbook = true

[cpuext]
## Use CPU AES-NI instructions set when work with cryptography when available (default: true)
# aesni = true
## Use CPU AVX instructions set when work with cryptography when available (default: true)
# avx = true
## Force usage of CPU instructions set, even if they not found
## DO NOT TOUCH that option if you really don't know what are you doing!
# force = false
endgamefiles/jail.local (new file, 10 lines)
@@ -0,0 +1,10 @@
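# With these defaults fail2ban bans an offending IP for 10 minutes after 5
# failures within a 10-minute window; only the sshd jail is enabled.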
[DEFAULT]
ignoreip = 127.0.0.1/8
bantime = 10m
findtime = 10m
maxretry = 5
banaction = iptables-multiport
backend = auto

[sshd]
enabled = true
endgamefiles/limits.conf (new file, 52 lines)
@@ -0,0 +1,52 @@
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
# - a user name
# - a group name, with @group syntax
# - the wildcard *, for default entry
# - the wildcard %, can be also used with %group syntax,
#   for maxlogin limit
# - NOTE: group and wildcard limits are not applied to root.
#   To apply a limit to the root user, <domain> must be
#   the literal username root.
#
#<type> can have the two values:
# - "soft" for enforcing the soft limits
# - "hard" for enforcing hard limits
#
#<item> can be one of the following:
# - core - limits the core file size (KB)
# - data - max data size (KB)
# - fsize - maximum filesize (KB)
# - memlock - max locked-in-memory address space (KB)
# - nofile - max number of open file descriptors
# - rss - max resident set size (KB)
# - stack - max stack size (KB)
# - cpu - max CPU time (MIN)
# - nproc - max number of processes
# - as - address space limit (KB)
# - maxlogins - max number of logins for this user
# - maxsyslogins - max number of logins on the system
# - priority - the priority to run user process with
# - locks - max number of file locks the user can hold
# - sigpending - max number of pending signals
# - msgqueue - max memory used by POSIX message queues (bytes)
# - nice - max nice priority allowed to raise to values: [-20, 19]
# - rtprio - max realtime priority
# - chroot - change root to directory (Debian-specific)
#
#<domain> <type> <item> <value>
#

root soft nofile 8192
root hard nofile 8388608

#disable coredumps to prevent information leak
* soft core 0
* hard core 0
# End of file
endgamefiles/lua/cap.lua (new file, 439 lines)
@@ -0,0 +1,439 @@
aes = require "resty.aes"
hmac = require "resty.hmac"
str = require "resty.string"
cook = require "resty.cookie"
random = require "resty.random"
sha256 = require "resty.sha256"

-- encryption key and salt must be shared across fronts. salt must be 8 chars
local key = "encryption_key"
local salt = "salt1234"
-- how long the captcha is valid for. 120 sec is for testing; 3600 (1 hour) should be used in production.
local session_timeout = sessionconfigvalue

-- needed for reading the master key
function fromhex(hex_str)
    local bin_str = ""

    for i = 1, #hex_str, 2 do
        local hex_char = string.sub(hex_str, i, i+1)
        bin_str = bin_str .. string.char(tonumber(hex_char, 16))
    end

    return bin_str
end

-- generated in setup.sh based on the encryption key using PBKDF2, which hardens it
-- against bruteforce attacks, making the implementation a little more foolproof. Here's the command used:
-- OPENSSL 3:
-- openssl kdf -keylen 32 -kdfopt digest:SHA256 -kdfopt pass:$KEY -kdfopt salt:$SALT -kdfopt iter:2000000 PBKDF2 | sed s/://g
-- OPENSSL 1.1.1n:
-- openssl enc -aes-256-cbc -pbkdf2 -pass pass:$KEY -S $SALT_HEX -iter 2000000 -md sha256 -P | grep "key" | sed s/key=//g
local master_key = fromhex("masterkeymasterkeymasterkey")

b = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

function base64_encode(data)
    return ((data:gsub('.', function(x)
        local r,b='',x:byte()
        for i=8,1,-1 do r=r..(b%2^i-b%2^(i-1)>0 and '1' or '0') end
        return r;
    end)..'0000'):gsub('%d%d%d?%d?%d?%d?', function(x)
        if (#x < 6) then return '' end
        local c=0
        for i=1,6 do c=c+(x:sub(i,i)=='1' and 2^(6-i) or 0) end
        return b:sub(c+1,c+1)
    end)..({ '', '==', '=' })[#data%3+1])
end

function base64_decode(data)
    data = string.gsub(data, '[^'..b..'=]', '')
    return (data:gsub('.', function(x)
        if (x == '=') then return '' end
        local r,f='',(b:find(x)-1)
        for i=6,1,-1 do r=r..(f%2^i-f%2^(i-1)>0 and '1' or '0') end
        return r;
    end):gsub('%d%d%d?%d?%d?%d?%d?%d?', function(x)
        if (#x ~= 8) then return '' end
        local c=0
        for i=1,8 do c=c+(x:sub(i,i)=='1' and 2^(8-i) or 0) end
        return string.char(c)
    end))
end

function hmac_digest(key, data)
    local hmac_sha256_lib = hmac:new(key, hmac.ALGOS.SHA256)
    hmac_sha256_lib:update(data)
    return hmac_sha256_lib:final()
end

function sha256_digest(data)
    local sha256_lib = sha256:new()
    sha256_lib:update(data)
    return sha256_lib:final()
end

-- This function encrypts the cookie and outputs it ready for use in the following format: base64(cookie_token + cookie_ciphertext + cookie_tag)
-- cookie_token is 32 bytes
-- cookie_ciphertext is variable
-- cookie_tag is 16 bytes
function encrypt(cookie_plaintext)
    local cookie_token = sha256_digest(random.token(32))
    local derived_key = hmac_digest(master_key, cookie_token)
    local aes_ctx = aes:new(derived_key, salt, aes.cipher(256, "gcm"), aes.hash.sha256, 1, 12)
    local encrypted = aes_ctx:encrypt(cookie_plaintext)
    return base64_encode(cookie_token .. encrypted[1] .. encrypted[2])
end
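-- Explanatory note on the aes:new() arguments above: in lua-resty-string's AES
-- API these are (key, salt, cipher, hash, hash_rounds, iv_len), i.e. a single
-- key-derivation round over the already-HMAC-derived key and a 12-byte IV,
-- the standard nonce size for AES-GCM.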

-- This function decrypts the cookie as it is received, no need to decode base64 or parse anything.
-- returns nil if any step of the decryption fails
function decrypt(cookie_ciphertext)
    local decoded_cookie = base64_decode(cookie_ciphertext)
    -- cookie should be at least 49 bytes long (32 for the token + 16 for the tag + at least 1 for the content)
    if (#decoded_cookie <= 48) then
        return nil, "Decoded cookie too short (<= 48 bytes)"
    end
    -- parsing the cookie
    local cookie_token = string.sub(decoded_cookie, 1, 32)
    local cookie_ciphertext = string.sub(decoded_cookie, 33, (#decoded_cookie - 16))
    local cookie_tag = string.sub(decoded_cookie, (#decoded_cookie - 15), #decoded_cookie)
    -- deriving the key and setting up AES context
    local derived_key = hmac_digest(master_key, cookie_token)
    local aes_ctx = aes:new(derived_key, salt, aes.cipher(256, "gcm"), aes.hash.sha256, 1, 12)
    return aes_ctx:decrypt(cookie_ciphertext, cookie_tag)
end
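
-- killconnection() below hands the offending client's circuit id to
-- kill_circuit (defined in a separate EndGame Lua file) on a zero-delay timer;
-- per the README, circuit kill commands go through Tor's control port,
-- authenticated with TORAUTHPASSWORD.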
function killconnection(pa)
    if pa ~= "no_proxy" then
        local ok, err = ngx.timer.at(0, kill_circuit, ngx.var.remote_addr, ngx.var.proxy_protocol_addr)
        if not ok then
            ngx.log(ngx.ERR, "failed to create timer: ", err)
            return
        end
    end
end

function blockcookies(field)
    ngx.shared.blocked_cookies:set(field, 1, 3600)
end

function generalerror()
    ngx.header.content_type = "text/plain"
    ngx.say("403 DDOS filter killed your path. (You probably sent too many requests at once). Not calling you a bot, bot, but grab a new identity and try again.")
    ngx.flush()
    ngx.exit(200)
end

function sessionexpired()
    ngx.header.content_type = "text/html"
    ngx.say('<h1>EndGame Session has expired</h1> <h3>and the post request was not processed.</h3> <p><a target="_blank" href="/">After you pass another captcha</a> (clicking opens new tab), you can reload this page (press F5) and submit the request again to prevent data loss. <b>If you leave this page without submitting again, what you just submitted will be lost.</b></p>')
    ngx.flush()
    ngx.exit(401)
end

function killblockdrop(pa, field)
    if pa ~= nil then
        killconnection(pa)
    end
    if field ~= nil then
        blockcookies(field)
    end
    ngx.exit(444)
end

local cookie, err = cook:new()
if not cookie then
    ngx.log(ngx.ERR, err)
    return
end

-- check proxy_protocol_addr; if present, kill the circuit when needed
pa = "no_proxy"
if ngx.var.proxy_protocol_addr ~= nil then
    pa = ngx.var.proxy_protocol_addr
end

-- if "Host" header is invalid / missing, kill the circuit and return nothing
if in_array(allowed_hosts, ngx.var.http_host) == nil then
    ngx.log(ngx.ERR, "Wrong host (" .. ngx.var.http_host .. ") " .. ngx.var.remote_addr .. "|" .. pa)
    killblockdrop(pa, nil)
end

-- only GET and POST requests are allowed; the others are not used.
if ngx.var.request_method ~= "POST" and ngx.var.request_method ~= "GET" then
    ngx.log(ngx.ERR, "Wrong request (" .. ngx.var.request_method .. ") " .. ngx.var.remote_addr .. "|" .. pa)
    killblockdrop(pa, nil)
end

-- requests without user-agent are usually invalid
if ngx.var.http_user_agent == nil then
    ngx.log(ngx.ERR, "Missing user agent " .. ngx.var.remote_addr .. "|" .. pa)
    killblockdrop(pa, nil)
end

-- POST without referer is invalid. some poorly configured clients may complain about this
if ngx.var.request_method == "POST" and ngx.var.http_referer == nil then
    ngx.log(ngx.ERR, "Post without referer " .. ngx.var.remote_addr .. "|" .. pa)
    killblockdrop(pa, nil)
end

-- get cookie
local field, err = cookie:get("dcap")
-- check if cookie is valid.
if not err and field ~= nil then
    if type(field) ~= "string" then
        ngx.log(ngx.ERR, "Invalid dcap value! Not string!" .. ngx.var.remote_addr .. "|" .. pa)
        killblockdrop(pa, nil)
    end
    if not string.match(field, "^([A-Za-z0-9+/=]+)$") then
        ngx.log(ngx.ERR, "Invalid dcap value! Incorrect format! (" .. field .. ")" .. ngx.var.remote_addr .. "|" .. pa)
        killblockdrop(pa, nil)
    end
end

-- check if blacklisted by the rate limiter; if so, show the client a message and exit. can get creative with this.
local blocked_cookies = ngx.shared.blocked_cookies
local bct, btcflags = blocked_cookies:get(field)
if bct then
    generalerror()
end

-- Check dcap cookie get variable to bypass EndGame! Allows some cross site attacks! Enable if you need this feature.

-- local args = ngx.req.get_uri_args(2)
-- for key, val in pairs(args) do
--     if key == "dcapset" then
--         plaintext = aes_256_gcm_sha256x1:decrypt(fromhex(val))
--         if not plaintext then
--             killconnection(pa)
--             blockcookies(field)
--             ngx.exit(444)
--         end
--         cookdata = split(plaintext, "|")
--         if (cookdata[1] == "captcha_solved") then
--             if (tonumber(cookdata[2]) + session_timeout) > ngx.now() then
--                 local ok, err =
--                     cookie:set(
--                     {
--                         key = "dcap",
--                         value = val,
--                         path = "/",
--                         domain = ngx.var.host,
--                         httponly = true,
--                         max_age = math.floor((tonumber(cookdata[2]) + session_timeout)-ngx.now()+0.5),
--                         samesite = "Lax"
--                     })
--                 if not ok then
--                     ngx.log(ngx.ERR, err)
--                     return
--                 end
--                 field = val
--                 err = nil
--             end
--         end
--     end
-- end

caperror = nil
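
-- The dcap cookie plaintext is pipe-delimited:
--   random_token|state|timestamp|circuit_id|captcha_answer|counter
-- (the first GET only fills the first four fields; the answer and counter are
-- apparently added by the captcha page generator in caphtml). The state then
-- walks through the branches below:
--   "queue"          -> set on the first GET; client waits ni seconds, then refreshes
--   "cap_not_solved" -> set when the captcha page is served; about 60 seconds to answer
--   "captcha_solved" -> set on a correct answer; valid for session_timeout seconds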

-- check cookie support similar to testcookie
if ngx.var.request_method == "GET" then
    if err or field == nil then
        local ni = random.number(5,20)
        local tstamp = ngx.now() + ni
        local plaintext = random.token(random.number(5, 20)) .. "|queue|" .. tstamp .. "|" .. pa .. "|"
        local ciphertext = encrypt(plaintext)
        local ok, err =
            cookie:set(
            {
                key = "dcap",
                value = ciphertext,
                path = "/",
                domain = ngx.var.host,
                httponly = true,
                max_age = 30,
                samesite = "Lax"
            }
        )
        if not ok then
            ngx.log(ngx.ERR, err)
            return
        end
        ngx.header["Refresh"] = ni
        ngx.header.content_type = "text/html"
        local file = io.open("/etc/nginx/resty/queue.html")
        if not file then
            ngx.exit(500)
        end
        local queue, err = file:read("*a")
        file:close()
        ngx.say(queue)
        ngx.flush()
        ngx.exit(200)
    else
        plaintext = decrypt(field)
        if not plaintext then
            killblockdrop(pa, field)
        end
        cookdata = split(plaintext, "|")
        if (cookdata[2] == "queue") then
            if tonumber(cookdata[3]) > ngx.now() or ngx.now() > tonumber(cookdata[3]) + 60 then
                killblockdrop(pa, field)
            end

            --in high levels of attack this system may make reachability of your service worse. But it protects against certain kinds of dcap caching attacks.
            if "no_proxy" ~= cookdata[4] then
                if pa ~= cookdata[4] then
                    ngx.log(ngx.ERR, "QUEUE: Incorrect circuit id (" .. cookdata[4] .. ") for " .. pa)
                    killblockdrop(pa, nil)
                end
            end

            -- captcha generator functions
            require "caphtml"
            displaycapd(pa)
            ngx.flush()
            ngx.exit(200)
        elseif (cookdata[2] == "cap_not_solved") then
            if (tonumber(cookdata[3]) + 60) > ngx.now() then
                killconnection(pa)
                ngx.header.content_type = "text/html"
                ngx.say("<h1>THINK OF WHAT YOU HAVE DONE!</h1>")
                ngx.say("<p>That captcha was generated just for you. And look at what you did. Ignoring the captcha... not even giving an incorrect answer to his meaningless existence. You couldn't even give him false hope. Shame on you.</p>")
                ngx.say("<p>Don't immediately refresh for a new captcha! Try and fail. You must now wait about a minute for a new captcha to load.</p>")
                ngx.flush()
                ngx.exit(200)
            end
            require "caphtml"
            displaycapd(pa)
            ngx.flush()
            ngx.exit(200)
        elseif (cookdata[2] == "captcha_solved") then
            if (tonumber(cookdata[3]) + session_timeout) < ngx.now() then
                require "caphtml"
                caperror = "Session expired"
                displaycapd(pa)
                ngx.flush()
                ngx.exit(200)
            end
        else
            ngx.log(ngx.ERR, "No matching cook type data but valid parse! Encryption break? Cookie (" .. field .. ") [" .. plaintext .. "] circuit: " .. pa)
            killblockdrop(pa, field)
        end
    end
end

if ngx.var.request_method == "POST" then
    --Will trigger under cookie loading error
    if err then
        sessionexpired()
    end

    if field ~= nil then
        plaintext = decrypt(field)
        if not plaintext then
            killblockdrop(pa, field)
        end
        cookdata = split(plaintext, "|")
        if (cookdata[2] == "queue") then
            killblockdrop(pa, field)
        elseif (cookdata[2] == "captcha_solved") then
            return
        elseif (cookdata[2] == "cap_not_solved") then
            require "caphtml"
            if (tonumber(cookdata[3]) + session_timeout) < ngx.now() then
                caperror = "Session expired"
                displaycapd(pa)
                ngx.flush()
                ngx.exit(200)
            end

            cookdata = split(plaintext, "|")
            expiretime = tonumber(cookdata[3])
            if expiretime == nil or (tonumber(expiretime) + 60) < ngx.now() then
                caperror = "Captcha expired"
                displaycapd(pa)
                ngx.flush()
                ngx.exit(200)
            end

            -- resty has a library for parsing POST data but it's not really needed
            ngx.req.read_body()
            local dataraw = ngx.req.get_body_data()
            if dataraw == nil then
                caperror = "You didn't submit anything. Try again."
                displaycapd(pa)
                ngx.flush()
                ngx.exit(200)
            end

            local data = ngx.req.get_body_data()
            data = split(data, "&")
            local sentcap = ""
            local splitvalue = ""
            for index, value in ipairs(data) do
                if index > string.len(cookdata[5]) then
                    ngx.log(ngx.ERR, "CAPTCHA SOLVE POST: EXCESSIVELY LONG ANSWER POST FOR ANSWER (" .. cookdata[5] .. ") for " .. pa)
                    killblockdrop(pa, field)
                    break
                end
                splitvalue = split(value, "=")[2]
                if splitvalue == nil then
                    caperror = "You Got That Wrong. Try again"
                    displaycapd(pa)
                    ngx.flush()
                    ngx.exit(200)
                end
                sentcap = sentcap .. splitvalue
            end

            --in high levels of attack this system may make reachability of your service worse. But it protects against certain kinds of dcap caching attacks.
            if "no_proxy" ~= cookdata[4] then
                if pa ~= cookdata[4] then
                    ngx.log(ngx.ERR, "CAPTCHA SOLVE POST: Incorrect circuit id (" .. cookdata[4] .. ") for " .. pa)
                    killblockdrop(pa, field)
                end
            end

            if string.lower(sentcap) == string.lower(cookdata[5]) then
                --block valid sent cookies to prevent people from just sending the same solved solution over and over again
                blockcookies(field)
                cookdata[1] = random.token(random.number(5, 20))
                cookdata[2] = "captcha_solved"
                cookdata[3] = ngx.now()
                cookdata[6] = "0"
                local ciphertext = encrypt(table.concat(cookdata, "|"))
                local ok, err =
                    cookie:set(
                    {
                        key = "dcap",
                        value = ciphertext,
                        path = "/",
                        domain = ngx.var.host,
                        httponly = true,
                        max_age = session_timeout,
                        samesite = "Lax"
                    }
                )
                if not ok then
                    ngx.say("cookie error")
                    return
                end
                local redirect_to = ngx.var.uri
                if ngx.var.query_string ~= nil then
                    redirect_to = redirect_to .. "?" .. ngx.var.query_string
                end
                return ngx.redirect(redirect_to)
            else
                caperror = "You Got That Wrong. Try again"
            end
            displaycapd(pa)
            ngx.flush()
            ngx.exit(200)
        end
    else
        --Will trigger when cookie could be loaded but field isn't valid. Sanity check stuff.
        sessionexpired()
    end
end
endgamefiles/naxsi_core.rules (new file, 84 lines)
@@ -0,0 +1,84 @@
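## Explanatory note: each MainRule that matches inside one of its "mz:" zones
## adds its "s:" value to the named score counter (e.g. $SQL, $XSS); requests
## are blocked once the CheckRule thresholds (set in the NGINX site config,
## here typically site.conf) are exceeded. Rules with a leading "#" are
## disabled defaults.
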
##################################
## INTERNAL RULES IDS:1-999 ##
##################################
#@MainRule "msg:weird request, unable to parse" id:1;
#@MainRule "msg:request too big, stored on disk and not parsed" id:2;
#@MainRule "msg:invalid hex encoding, null bytes" id:10;
#@MainRule "msg:unknown content-type" id:11;
#@MainRule "msg:invalid formatted url" id:12;
#@MainRule "msg:invalid POST format" id:13;
#@MainRule "msg:invalid POST boundary" id:14;
#@MainRule "msg:invalid JSON" id:15;
#@MainRule "msg:empty POST" id:16;
#@MainRule "msg:libinjection_sql" id:17;
#@MainRule "msg:libinjection_xss" id:18;
#@MainRule "msg:no generic rules" id:19;
#@MainRule "msg:bad utf8" id:20;

##################################
## SQL Injections IDs:1000-1099 ##
##################################
#MainRule "rx:select|union|update|delete|insert|table|from|ascii|hex|unhex|drop|load_file|substr|group_concat|dumpfile" "msg:sql keywords" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:4" id:1000;
#MainRule "str:\"" "msg:double quote" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:8,$XSS:8" id:1001;
#MainRule "str:0x" "msg:0x, possible hex encoding" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:2" id:1002;
## Hardcore rules
MainRule "str:/*" "msg:mysql comment (/*)" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:8" id:1003;
MainRule "str:*/" "msg:mysql comment (*/)" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:8" id:1004;
MainRule "str:|" "msg:mysql keyword (|)" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:8" id:1005;
MainRule "str:&&" "msg:mysql keyword (&&)" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:8" id:1006;
## end of hardcore rules
MainRule "str:--" "msg:mysql comment (--)" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:4" id:1007;
MainRule "str:;" "msg:semicolon" "mz:URL|ARGS" "s:$SQL:4,$XSS:8" id:1008;
#MainRule "str:=" "msg:equal sign in var, probable sql/xss" "mz:ARGS|BODY" "s:$SQL:2" id:1009;
MainRule "str:(" "msg:open parenthesis, probable sql/xss" "mz:URL|$HEADERS_VAR:Cookie" "s:$SQL:4,$XSS:8" id:1010;
MainRule "str:)" "msg:close parenthesis, probable sql/xss" "mz:URL|$HEADERS_VAR:Cookie" "s:$SQL:4,$XSS:8" id:1011;
MainRule "str:'" "msg:simple quote" "mz:ARGS|URL|$HEADERS_VAR:Cookie" "s:$SQL:4,$XSS:8" id:1013;
#MainRule "str:," "msg:comma" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:4" id:1015;
#MainRule "str:#" "msg:mysql comment (#)" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:4" id:1016;
MainRule "str:@@" "msg:double arobase (@@)" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:4" id:1017;

###############################
## OBVIOUS RFI IDs:1100-1199 ##
###############################
#MainRule "str:http://" "msg:http:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1100;
#MainRule "str:https://" "msg:https:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1101;
MainRule "str:ftp://" "msg:ftp:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1102;
MainRule "str:php://" "msg:php:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1103;
MainRule "str:sftp://" "msg:sftp:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1104;
MainRule "str:zlib://" "msg:zlib:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1105;
MainRule "str:data://" "msg:data:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1106;
MainRule "str:glob://" "msg:glob:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1107;
MainRule "str:phar://" "msg:phar:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1108;
MainRule "str:file://" "msg:file:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1109;
MainRule "str:gopher://" "msg:gopher:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1110;
MainRule "str:zip://" "msg:zip:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1111;
MainRule "str:expect://" "msg:expect:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1112;
MainRule "str:input://" "msg:input:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1113;

#######################################
## Directory traversal IDs:1200-1299 ##
#######################################
MainRule "str:.." "msg:double dot" "mz:ARGS|URL|$HEADERS_VAR:Cookie" "s:$TRAVERSAL:4" id:1200;
MainRule "str:/etc/passwd" "msg:obvious probe" "mz:ARGS|URL|BODY|$HEADERS_VAR:Cookie" "s:$TRAVERSAL:4" id:1202;
MainRule "str:c:\\" "msg:obvious windows path" "mz:ARGS|URL|BODY|$HEADERS_VAR:Cookie" "s:$TRAVERSAL:4" id:1203;
MainRule "str:cmd.exe" "msg:obvious probe" "mz:ARGS|URL|BODY|$HEADERS_VAR:Cookie" "s:$TRAVERSAL:4" id:1204;
MainRule "str:\\" "msg:backslash" "mz:ARGS|URL|$HEADERS_VAR:Cookie" "s:$TRAVERSAL:4" id:1205;
#MainRule "str:/" "msg:slash in args" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$TRAVERSAL:2" id:1206;
MainRule "str:/..;/" "msg:dir traversal bypass" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$TRAVERSAL:2" id:1207;

########################################
## Cross Site Scripting IDs:1300-1399 ##
########################################
MainRule "str:<" "msg:html open tag" "mz:ARGS|URL|$HEADERS_VAR:Cookie" "s:$XSS:8" id:1302;
MainRule "str:>" "msg:html close tag" "mz:ARGS|URL|$HEADERS_VAR:Cookie" "s:$XSS:8" id:1303;
#MainRule "str:[" "msg:[, possible js" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$XSS:4" id:1310;
#MainRule "str:]" "msg:], possible js" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$XSS:4" id:1311;
MainRule "str:~" "msg:~ character" "mz:URL|ARGS|$HEADERS_VAR:Cookie" "s:$XSS:4" id:1312;
MainRule "str:`" "msg:grave accent !" "mz:ARGS|URL|$HEADERS_VAR:Cookie" "s:$XSS:8" id:1314;
#MainRule "rx:%[2|3]." "msg:double encoding !" "mz:ARGS|URL|BODY|$HEADERS_VAR:Cookie" "s:$XSS:8" id:1315;

####################################
## Evading tricks IDs: 1400-1500 ##
####################################
MainRule "str:&#" "msg:utf7/8 encoding" "mz:ARGS|BODY|URL|$HEADERS_VAR:Cookie" "s:$EVADE:4" id:1400;
MainRule "str:%U" "msg:M$ encoding" "mz:ARGS|BODY|URL|$HEADERS_VAR:Cookie" "s:$EVADE:4" id:1401;
9 endgamefiles/naxsi_whitelist.rules Normal file
@@ -0,0 +1,9 @@
BasicRule wl:10;
BasicRule wl:20;
BasicRule wl:16;
BasicRule wl:12;
BasicRule wl:13;
BasicRule wl:1310;
BasicRule wl:1311;
BasicRule wl:1008 "mz:$URL:/search/|ARGS";
BasicRule wl:1013 "mz:$URL:/search/|ARGS";
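# Explanatory note (not part of the original file): each "BasicRule wl:<id>"
# above whitelists the naxsi MainRule with that id. The optional "mz:" argument
# narrows the whitelist to one zone, e.g. wl:1008 re-allows semicolons, but
# only in the arguments of /search/ URLs. A hedged, hypothetical example of
# scoping a different rule the same way would be:
#BasicRule wl:1015 "mz:$URL:/post/|BODY";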
56 endgamefiles/nginx-update.sh Normal file
@@ -0,0 +1,56 @@
#!/bin/bash
apt-get update
apt-get -y upgrade

# Detect the installed nginx version so the matching source tarball can be fetched.
nginx -V
command="nginx -v"
nginxv=$( ${command} 2>&1 )
NGINXVERSION=$(echo $nginxv | grep -o '[0-9.]*$')

# Reuse the cached module build for this nginx version if one exists.
modulecache=$NGINXVERSION-modules.tar.gz
if test -f $modulecache; then
    rm -R /etc/nginx/modules
    mkdir /etc/nginx/modules
    tar zxvf $modulecache
    mv modules /etc/nginx/modules/
    nginx -t
    exit 0
else
    # Stale caches belong to other nginx versions; drop them before rebuilding.
    rm -R *-modules.tar.gz
fi

wget https://nginx.org/download/nginx-$NGINXVERSION.tar.gz
tar -xzvf nginx-$NGINXVERSION.tar.gz

cp -R dependencies/* nginx-$NGINXVERSION/

cd nginx-$NGINXVERSION

export LUAJIT_LIB=/usr/local/lib
export LD_LIBRARY_PATH=/usr/local/lib
export LUAJIT_INC=/usr/local/include/luajit-2.1
./configure \
--with-ld-opt="-Wl,-rpath,/usr/local/libm,-lpcre" \
--with-compat \
--add-dynamic-module=naxsi/naxsi_src \
--add-dynamic-module=headers-more-nginx-module \
--add-dynamic-module=echo-nginx-module \
--add-dynamic-module=ngx_devel_kit \
--add-dynamic-module=lua-nginx-module

make -j8 modules

# Install the freshly built modules and cache them for future runs.
cp -r objs modules
rm -R /etc/nginx/modules
mkdir /etc/nginx/modules
tar -zcvf $modulecache modules
mv modules /etc/nginx/modules

cd ..
mv nginx-$NGINXVERSION/$modulecache $modulecache
rm -R nginx-*.tar.gz
rm -R nginx-$NGINXVERSION

nginx -t
exit 0
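# Illustrative sketch of the cache behaviour above (the version number is a
# made-up placeholder, not part of the script): for nginx 1.22.1 the cache
# produced is "1.22.1-modules.tar.gz", so a later run on the same version
# skips the whole source build and effectively just does:
#   tar zxvf 1.22.1-modules.tar.gz && mv modules /etc/nginx/modules/ && nginx -t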
80 endgamefiles/nginx.conf Normal file
@@ -0,0 +1,80 @@
user www-data;
worker_processes auto;
pid /var/run/nginx.pid;
load_module modules/modules/ngx_http_headers_more_filter_module.so;
load_module modules/modules/ngx_http_naxsi_module.so;
load_module modules/modules/ngx_http_echo_module.so;
load_module modules/modules/ndk_http_module.so;
load_module modules/modules/ngx_http_lua_module.so;

events {
    worker_connections 8096;
}

http {

    ##
    # Basic Settings
    ##
    server_tokens off;

    # Keep Alive
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    reset_timedout_connection on;

    lua_shared_dict blocked_cookies 250M;

    # Timeouts
    client_body_timeout 20s;
    client_header_timeout 20s;
    keepalive_timeout 600s;
    send_timeout 20s;
    client_max_body_size 10m;
    client_body_buffer_size 10m;
    proxy_connect_timeout 120s;
    proxy_send_timeout 20s;
    proxy_read_timeout 20s;
    directio 8m;
    directio_alignment 4k;

    log_format detailed escape=json
    '{'
        '"timestamp": "$time_iso8601",'
        '"remote_addr": "$remote_addr",'
        '"upstream_addr": "$upstream_addr",'
        '"connection": "$connection",'
        '"connection_requests": "$connection_requests",'
        '"request_time": "$request_time",'
        '"upstream_response_time": "$upstream_response_time",'
        '"status": "$status",'
        '"upstream_status": "$upstream_status",'
        '"body_bytes_sent": "$body_bytes_sent",'
        '"request": "$request",'
        '"http_user_agent": "$http_user_agent",'
        '"cookies": "$http_cookie"'
    '}';
    access_log /var/log/nginx/access.log;
    proxy_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    include /etc/nginx/naxsi_core.rules;

    gzip on;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xm;
    gzip_proxied no-cache no-store private expired auth;
    gzip_min_length 1000;
    gzip_comp_level 9;

    #add_header X-Content-Type-Options "nosniff";
    #add_header X-Frame-Options "SAMEORIGIN";
    #add_header X-Xss-Protection "1; mode=block";

    ##
    # Virtual Host Configs
    ##
    include /etc/nginx/sites-enabled/*;
}
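# Explanatory note (not in the original file): the JSON "detailed" log_format
# defined above is only emitted if a log target references it by name, e.g.:
#   access_log /var/log/nginx/access.log detailed;
# As shipped, access_log uses nginx's default "combined" format.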
BIN endgamefiles/repokeys/i2pd.gpg Normal file
Binary file not shown.
BIN endgamefiles/repokeys/nginx.gpg Normal file
Binary file not shown.
BIN endgamefiles/repokeys/tor-project.gpg Normal file
Binary file not shown.
BIN endgamefiles/resty.tgz Normal file
Binary file not shown.
149 endgamefiles/resty/cap_d.css Normal file
File diff suppressed because one or more lines are too long
222 endgamefiles/resty/caphtml.lua Normal file
@@ -0,0 +1,222 @@
function displaycapd(pa)
    ngx.header.content_type = "text/html"
    local cookie, err = cook:new()
    if not cookie then
        ngx.log(ngx.ERR, err)
        ngx.say("cookie error")
        ngx.exit(200)
    end

    local blocked_cookies = ngx.shared.blocked_cookies
    local field, err = cookie:get("dcap")
    plaintext = decrypt(field)
    cookdata = split(plaintext, "|")

    if (cookdata[2] == "cap_not_solved") then
        if (cookdata[6] == "3") then
            blocked_cookies:set(field, 1, 120)
            local ni = random.number(5,20)
            local tstamp = ngx.now() + ni
            local plaintext = random.token(random.number(5, 20)) .. "|queue|" .. tstamp .. "|" .. pa .. "|"
            local ciphertext = encrypt(plaintext)
            cookie:set(
            {
                key = "dcap",
                value = ciphertext,
                path = "/",
                domain = ngx.var.host,
                httponly = true,
                max_age = 30,
                samesite = "Lax"
            })
            ngx.header["Refresh"] = ni
            ngx.header.content_type = "text/html"
            local file = io.open("/etc/nginx/resty/queue.html")
            local queue, err = file:read("*a")
            file:close()
            ngx.say(queue)
            ngx.flush()
            ngx.exit(200)
        end
    end

    local function getChallenge()
        local success, module = pcall(require, "challenge")
        if not success then
            ngx.header["Refresh"] = '5'
            ngx.say("Captcha race condition hit. Refreshing in 5 seconds.")
            ngx.exit(200)
        end
        local ni = random.number(0,49)
        if challengeArray[ni] ~= nil then
            local challenge = challengeArray[ni]
            return split(challenge, "*")
        else
            ngx.header["Refresh"] = '5'
            ngx.say("Captcha race condition hit. Refreshing in 5 seconds.")
            ngx.exit(200)
        end
    end

    local im = getChallenge()
    local challengeStyle = im[1]
    local challengeAnswer = im[2]
    local challengeImage = im[3]

    local tstamp = ngx.now()
    local newcookdata = random.token(random.number(5, 20)) .. "|cap_not_solved|" .. tstamp .. "|" .. pa .. "|" .. challengeAnswer

    if (cookdata[2] == "queue") then
        newcookdata = newcookdata .. "|1"
    else
        newcookdata = newcookdata .. "|" .. tonumber(cookdata[6] + 1)
    end
    local ciphertext = encrypt(newcookdata)
    local ok, err =
    cookie:set(
    {
        key = "dcap",
        value = ciphertext,
        path = "/",
        domain = ngx.var.host,
        httponly = true,
        samesite = "Lax"
    }
    )

    blocked_cookies:set(field, 1, 120)

    if not ok then
        ngx.say("cookie error")
        ngx.exit(200)
    end

    ngx.say([[<!DOCTYPE html>
<html lang=en>
<head>
<title>DDOS Protection</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link id="favicon" rel="shortcut icon" href="FAVICON">
</head><body><style>]])

    local file = io.open("/etc/nginx/resty/cap_d.css")

    if not file then
        ngx.exit(500)
    end

    local css, err = file:read("*a")

    file:close()

    ngx.say(css)

    ngx.say(challengeStyle)

    ngx.say([[</style>
<div class="container">
<div class="left">
<div class="networkLogo slide-right-ani">
<div class="square"></div>
<div class="text">
<span>SITENAME</span>
<div class="sm">network</div>
</div>
</div>
<div class="cont">
<div class="serviceLogo slide-right-ani">
<div class="square"></div>
<div class="text">SITENAME</div>
</div>
<div class="tagline slide-right-ani">SITETAGLINE</div>
<div class="since slide-right-ani">since SITESINCE</div>
</div>
</div>
<div class="inner">]])
    if caperror ~= nil then
        ngx.say('<p class="slide-left-ani alert"><strong>' .. caperror .. '</strong></p>')
    else
        ngx.say('<p class="slide-left-ani">Select each text box and enter the letter or number you see within the circle below.</p>')
    end
    ngx.say([[<form class="ddos_form slide-left-ani" method="post">
<div class="input-box">
<input class="ch" type="text" name="c1" maxlength="1" pattern="[A-Za-z0-9]" autocomplete="off" autofocus>]])
    for i = 2, 6, 1 do
        ngx.say('<input class="ch" type="text" name="c' .. i .. '" maxlength="1" pattern="[A-Za-z0-9]" autocomplete="off">')
    end
    ngx.say('<div class="image" style="background-image:url(data:image/webp;base64,' .. challengeImage .. ');"></div>')
    ngx.say([[</div>
<div class="expire">
<div class="timer">
<div class="time-part-wrapper">
<div class="time-part seconds tens">
<div class="digit-wrapper">
<span class="digit">0</span>
<span class="digit">5</span>
<span class="digit">4</span>
<span class="digit">3</span>
<span class="digit">2</span>
<span class="digit">1</span>
<span class="digit">0</span>
</div>
</div>
<div class="time-part seconds ones">
<div class="digit-wrapper">
<span class="digit">0</span>
<span class="digit">9</span>
<span class="digit">8</span>
<span class="digit">7</span>
<span class="digit">6</span>
<span class="digit">5</span>
<span class="digit">4</span>
<span class="digit">3</span>
<span class="digit">2</span>
<span class="digit">1</span>
<span class="digit">0</span>
</div>
</div>
</div>
<div class="time-part-wrapper">
<div class="time-part hundredths tens">
<div class="digit-wrapper">
<span class="digit">0</span>
<span class="digit">9</span>
<span class="digit">8</span>
<span class="digit">7</span>
<span class="digit">6</span>
<span class="digit">5</span>
<span class="digit">4</span>
<span class="digit">3</span>
<span class="digit">2</span>
<span class="digit">1</span>
<span class="digit">0</span>
</div>
</div>
<div class="time-part hundredths ones">
<div class="digit-wrapper">
<span class="digit">0</span>
<span class="digit">9</span>
<span class="digit">8</span>
<span class="digit">7</span>
<span class="digit">6</span>
<span class="digit">5</span>
<span class="digit">4</span>
<span class="digit">3</span>
<span class="digit">2</span>
<span class="digit">1</span>
<span class="digit">0</span>
</div>
</div>
</div>
</div>
</div><button class="before" type="submit">Submit</button>
<button class="expired" type="submit"> Refresh (expired)</button>
</form>
</div>
</div>
</body>
</html>]])
    --if you need the answer right away for testing
    --ngx.say(challengeAnswer)
end
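-- Explanatory note (derived from this file, not part of the original source):
-- the encrypted "dcap" cookie decrypts to a pipe-delimited record of the form
--   token|state|timestamp|proxy_addr|answer|failcount
-- e.g. "k3jd8a|cap_not_solved|1699999999.123|no_proxy|A7QX2M|2" (made-up values).
-- After three failed answers (cookdata[6] == "3") the visitor is sent back to
-- the queue and the old cookie value is blacklisted for 120 seconds.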
BIN endgamefiles/resty/captcha Normal file
Binary file not shown.
1 endgamefiles/resty/queue.html Normal file
@@ -0,0 +1 @@
<!DOCTYPE html><html lang=en><title>SITENAME Access Queue</title><meta charset=UTF-8><meta content="width=device-width,initial-scale=1"name=viewport><link href="FAVICON"id=favicon rel="shortcut icon"><body><style>html{box-sizing:border-box}*,:after,:before{box-sizing:inherit}a,abbr,acronym,address,applet,article,aside,audio,b,big,blockquote,body,canvas,caption,center,cite,code,dd,del,details,dfn,div,dl,dt,em,embed,fieldset,figcaption,figure,footer,form,h1,h2,h3,h4,h5,h6,header,hgroup,html,i,iframe,img,ins,kbd,label,legend,li,mark,menu,nav,object,ol,output,p,pre,q,ruby,s,samp,section,small,span,strike,strong,sub,summary,sup,table,tbody,td,tfoot,th,thead,time,tr,tt,u,ul,var,video{margin:0;padding:0;border:0;font-size:100%;font:inherit;vertical-align:baseline}article,aside,details,figcaption,figure,footer,header,hgroup,menu,nav,section{display:block}strong{font-weight:700}body{line-height:1}ol,ul{list-style:none}blockquote,q{quotes:none}blockquote:after,blockquote:before,q:after,q:before{content:'';content:none}table{border-collapse:collapse;border-spacing:0}:focus{outline:0}input,select,textarea{border:0;box-shadow:0}html{height:100%}body{height:100%;line-height:1;background:#1A1E23;font-family:roboto,helvetica,sans-serif,arial,verdana,tahoma;font-size:16px;color:#fff}.logobgimg{background-image:url(SQUARELOGO)}.container{width:100%;margin:0 auto;min-height:100%;position:relative;max-height:100vh;overflow:hidden}.container>.inner{position:absolute;top:50%;left:0;right:0;margin:0 auto;text-align:center;transform:translateY(-50%);padding:0 20px}.container>.inner>.logo{display:inline-block;vertical-align:middle;text-decoration:none;margin-bottom:10px}.container>.inner>.logo>.square{display:inline-block;vertical-align:middle;width:40px;height:40px;background-color:#HEXCOLOR;background-size:40px 40px;background-position:center center;background-repeat:no-repeat;margin-right:10px}.container>.inner>.logo>.text{display:inline-block;vertical-align:middle;font-size:30px;color:#fff;font-weight:700}.container>.inner>.date{display:block;text-align:center;font-size:42px}p{margin-bottom:10px}.eta{display:inline-block;vertical-align:top;padding:0 8px;height:26px;border-radius:2px;background-color:#HEXCOLOR;color:#fff;font-weight:700;line-height:26px}.queue-graphic{width:10px;height:120px;margin:10px auto;text-align:center}.queue-graphic>.user-wrap>.user-body{float:left;position:absolute}.queue-graphic>.user-wrap>.user-body>.user-head{width:20px;height:20px;border-radius:100%;background-color:#HEXCOLOR;position:absolute;margin-top:15px;margin-left:-5px}.queue-graphic>.user-wrap>.user-body>.user-torsoe{background-color:#HEXCOLOR;height:42px;width:20px;position:absolute;margin-top:32px;margin-left:-10px;border-radius:5px;background-size:60% auto;background-position:60% center;background-repeat:no-repeat;z-index:2}.queue-graphic>.user-wrap>.user-body>.user-arm-left,.queue-graphic>.user-wrap>.user-body>.user-arm-right{margin-top:38px;height:40px;width:8px;background:#HEXCOLOR;float:left;border-radius:5px;position:relative}.queue-graphic>.user-wrap>.user-body>.user-arm-left{margin-left:-6px;z-index:5}.queue-graphic>.user-wrap>.user-body>.user-arm-right{background-color:#HEXCOLORDARK;margin-left:0}.queue-graphic>.user-wrap>.user-leg-left,.queue-graphic>.user-wrap>.user-leg-right{margin-top:70px;height:40px;width:8px;background:#HEXCOLOR;border-radius:5px;float:left}.queue-graphic>.user-wrap>.user-leg-left{margin-left:-5px}.queue-graphic>.user-wrap>.user-leg-right{margin-left:-2px}.queue-graphic>.user-wrap{margin:0 auto}.queue-graphic>.user-wrap>.user-body>.user-arm-left,.queue-graphic>.user-wrap>.user-leg-right{transform-origin:0 0;animation:queue-anim-frames-1 .25s alternate infinite ease-out}.queue-graphic>.user-wrap>.user-body>.user-arm-right,.queue-graphic>.user-wrap>.user-leg-left{transform-origin:0 0;animation:queue-anim-frames-2 .25s alternate infinite ease-in}.queue-graphic>.user-wrap>.user-leg-right{background-color:#HEXCOLORDARK;z-index:-1;position:absolute}.queue-graphic>.user-wrap>.user-leg-left{z-index:2;}@keyframes queue-anim-frames-1{from{-webkit-transform:rotate(-50deg)}to{-webkit-transform:rotate(50deg)}}@keyframes queue-anim-frames-2{from{-webkit-transform:rotate(50deg)}to{-webkit-transform:rotate(-50deg)}}.lg{height:26px;line-height:26px;font-size:22px}</style><div class=container><div class=inner><div class=logo><div class="logobgimg square"></div><div class=text>SITENAME</div></div><p>You have been placed in a queue, awaiting forwarding to the platform.<div class=queue-graphic><div class=user-wrap><div class=user-body><div class=user-head></div><div class="logobgimg user-torsoe"></div><div class=user-arm-left></div><div class=user-arm-right></div></div><div class=user-leg-left></div><div class=user-leg-right></div></div></div><p class=lg>Your estimated wait time is <span class=eta><3 minutes</span><p>Please do not refresh the page, you will be automatically redirected.</div></div>
473 endgamefiles/setup.sh Normal file
@@ -0,0 +1,473 @@
#!/bin/bash

#configuration
source endgame.config

#OS
source /etc/os-release

DIST="debian"
RELEASE=$VERSION_CODENAME
#RELEASE="bookworm"

if [[ "$ID" != "$DIST" || "$VERSION_CODENAME" != "$RELEASE" ]]; then
    echo "This EndGame version is only made for an install on $DIST $RELEASE. Please install it on the correct operating system!"
    exit 1
fi

echo "Welcome To The End Game DDOS Prevention Setup..."
if [ ${#MASTERONION} -lt 62 ]; then
    echo "MASTERONION doesn't have the correct length. The url needs to include the .onion at the end."
    exit 0
fi

if [ "$KEY" = "encryption_key" ]; then
    echo "Change the key variable to something which isn't the default value in setup.sh!"
    exit 0
fi

if [ ${#SALT} -lt 8 ]; then
    echo "Salt variable doesn't have the correct length. Make sure it is exactly 8 characters long!"
    exit 0
fi

if [ -z "$TORAUTHPASSWORD" ]; then
    echo "You didn't enter your Tor auth password."
    exit 0
fi

if [ $(id -u) -ne 0 ] && ! sudo -n true > /dev/null 2>&1; then
    echo "Your user doesn't have the required permissions to run the endgame script! Login as root (recommended) or sudo this script."
    exit 0
fi

echo "Proceeding to do the configuration and setup. This will take a while."
if $REBOOT; then
    echo -e "\e[1;35mThe system will reboot after finishing setup!"
fi

sleep 5

echo "Generating Master Key... should only take a second..."
SALT_HEX=$(echo -n "$SALT" | od -A n -t x1 | sed 's/ *//g')
MASTER_KEY=$(openssl enc -aes-256-cbc -pbkdf2 -pass pass:$KEY -S $SALT_HEX -iter 2000000 -md sha256 -P | grep "key" | sed s/key=//g)
echo "Done. MASTER_KEY = $MASTER_KEY"

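# Explanatory note (not in the original script): with -P, openssl prints the
# derived parameters instead of encrypting anything. The output contains a
# "salt=..." line, a "key=..." line and an iv line, e.g. (placeholder values):
#   salt=73616C7431323334
#   key=9F86D081884C7D65...
# The grep/sed pair above keeps only the hex value from the "key=" line.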
if $TORSETUP; then
    ### Tor configuration
    string="s/masterbalanceonion/"
    string+="$MASTERONION"
    string+="/g"
    sed -i $string site.conf

    string="s/torauthpassword/"
    string+="$TORAUTHPASSWORD"
    string+="/g"
    sed -i $string site.conf

    sed -i 's/--torconfig//' site.conf
    sed -i 's/#torconfig//' site.conf
fi

if $I2PSETUP; then
    sed -i 's/--i2pconfig//' site.conf
    sed -i 's/#i2pconfig//' site.conf
fi

# Nginx/Lua Configuration

string="s/encryption_key/"
string+="$KEY"
string+="/g"
sed -i $string lua/cap.lua

string="s/salt1234/"
string+="$SALT"
string+="/g"
sed -i $string lua/cap.lua

string="s/masterkeymasterkeymasterkey/"
string+="$MASTER_KEY"
string+="/g"
sed -i $string lua/cap.lua

string="s/sessionconfigvalue/"
string+="$SESSION_LENGTH"
string+="/g"
sed -i $string lua/cap.lua

string="s/sessionconfigvalue/"
string+="$SESSION_LENGTH"
string+="/g"
sed -i $string site.conf

# Styling
string="s/HEXCOLORDARK/"
string+="$HEXCOLORDARK"
string+="/g"
sed -i $string resty/cap_d.css

string="s/HEXCOLOR/"
string+="$HEXCOLOR"
string+="/g"
sed -i $string resty/cap_d.css

string="s|SQUARELOGO|"
string+="$SQUARELOGO|"
sed -i $string resty/cap_d.css

string="s|NETWORKLOGO|"
string+="$NETWORKLOGO|"
sed -i $string resty/cap_d.css

string="s/HEXCOLORDARK/"
string+="$HEXCOLORDARK"
string+="/g"
sed -i $string resty/queue.html

string="s/HEXCOLOR/"
string+="$HEXCOLOR"
string+="/g"
sed -i $string resty/queue.html

string="s/SITENAME/"
string+="$SITENAME"
string+="/g"
sed -i $string resty/queue.html

string="s|FAVICON|"
string+="$FAVICON|"
sed -i $string resty/queue.html

string="s|SQUARELOGO|"
string+="$SQUARELOGO|"
sed -i $string resty/queue.html

string="s/SITENAME/"
string+="$SITENAME"
string+="/g"
sed -i $string resty/caphtml.lua

string="s|SITETAGLINE|$SITETAGLINE|"
sed -i "$string" resty/caphtml.lua

string="s/SITESINCE/"
string+="$SITESINCE"
string+="/g"
sed -i $string resty/caphtml.lua

string="s|FAVICON|"
string+="$FAVICON|"
sed -i $string resty/caphtml.lua

if $LOCALPROXY; then
    string="s/#proxy_pass/"
    string+="proxy_pass"
    string+="/g"
    sed -i $string site.conf

    string="s/backendurl/"
    string+="$PROXYPASSURL"
    string+="/g"
    sed -i $string site.conf

else
    string="s/HOSTNAME1/"
    string+="$BACKENDONION1"
    string+="/g"
    sed -i $string startup.sh

    string="s/HOSTNAME2/"
    string+="$BACKENDONION2"
    string+="/g"
    sed -i $string startup.sh

    sed -i 's/#t/t/' startup.sh
    sed -i 's/#n/n/' startup.sh

    string="s/backendurl/"
    string+="tor"
    string+="/g"
    sed -i $string site.conf

fi
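# Illustrative sketch of what the substitutions above do (hypothetical value,
# not part of the script): with SITENAME="Acme", the placeholder text in
# resty/queue.html and resty/caphtml.lua is rewritten in place, e.g.:
#   sed -i "s/SITENAME/Acme/g" resty/queue.html
# The "|" delimiter variants are used where the value may itself contain "/"
# (logos, favicons and taglines that embed URLs or base64 data).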

apt update
apt install -y -q apt-transport-https lsb-release ca-certificates

echo "deb [signed-by=/etc/apt/trusted.gpg.d/nginx.gpg] https://nginx.org/packages/$DIST/ $RELEASE nginx" > /etc/apt/sources.list.d/nginx.list

#Update Kernel Version To Latest Unstable
echo "deb https://deb.debian.org/debian unstable main contrib non-free" > /etc/apt/sources.list.d/kernel.list
echo "deb-src https://deb.debian.org/debian unstable main contrib non-free" >> /etc/apt/sources.list.d/kernel.list
mv aptpreferences /etc/apt/preferences

cd repokeys

#Main Nginx Repo key. You can get it at https://nginx.org/keys/nginx_signing.key. Expires on June 14 2024.
mv nginx.gpg /etc/apt/trusted.gpg.d/nginx.gpg

if $TORSETUP || $LOCALPROXY; then
    echo "deb [signed-by=/usr/share/keyrings/tor-project.gpg] https://deb.torproject.org/torproject.org $RELEASE main" > /etc/apt/sources.list.d/tor.list
    echo "deb-src [signed-by=/usr/share/keyrings/tor-project.gpg] https://deb.torproject.org/torproject.org $RELEASE main" >> /etc/apt/sources.list.d/tor.list

    #Only uncomment the below lines if you know what you are doing.
    #echo "deb [signed-by=/usr/share/keyrings/tor-project.gpg] https://deb.torproject.org/torproject.org tor-nightly-main-$RELEASE main" >> /etc/apt/sources.list.d/tor.list
    #echo "deb-src [signed-by=/usr/share/keyrings/tor-project.gpg] https://deb.torproject.org/torproject.org tor-nightly-main-$RELEASE main" >> /etc/apt/sources.list.d/tor.list

    #Main Tor-Project Repo key. You can get it at https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc. Autoupdated via the deb.torproject.org-keyring package
    mv tor-project.gpg /usr/share/keyrings/tor-project.gpg
fi

if $I2PSETUP; then
    echo "deb [signed-by=/etc/apt/trusted.gpg.d/i2pd.gpg] https://repo.i2pd.xyz/$DIST $RELEASE main" > /etc/apt/sources.list.d/i2pd.list
    echo "deb-src [signed-by=/etc/apt/trusted.gpg.d/i2pd.gpg] https://repo.i2pd.xyz/$DIST $RELEASE main" >> /etc/apt/sources.list.d/i2pd.list
    #Main I2P Repo key. You can get it at https://repo.i2pd.xyz/r4sas.gpg
    mv i2pd.gpg /etc/apt/trusted.gpg.d/i2pd.gpg
fi

cd ..

apt update
apt install -y -q nginx build-essential zlib1g-dev libpcre3 libpcre3-dev uuid-dev gcc git wget curl libpcre2-dev linux-image-amd64

if $TORSETUP || $LOCALPROXY; then
    apt install -y -q tor nyx socat deb.torproject.org-keyring
fi

if $I2PSETUP; then
    apt install -y i2pd
fi

apt-get -y -q upgrade
apt-get -y -q full-upgrade

#hardening + compromise check tools
apt install -y -q apt-listbugs needrestart debsecan debsums fail2ban libpam-tmpdir rkhunter chkrootkit rng-tools

#setup fail2ban
mv jail.local /etc/fail2ban/jail.local
systemctl restart fail2ban
systemctl enable fail2ban

export LD_LIBRARY_PATH=/usr/local/lib
export LUAJIT_LIB=/usr/local/lib
export LUAJIT_INC=/usr/local/include/luajit-2.1
echo "LUAJIT_LIB=/usr/local/lib" > /etc/environment
echo "LUAJIT_INC=/usr/local/include/luajit-2.1" >> /etc/environment
echo "LD_LIBRARY_PATH=/usr/local/lib" >> /etc/environment
#Just in case the user is not using root
echo "export LD_LIBRARY_PATH=/usr/local/lib" >> ~/.bashrc

mkdir building
cp -R dependencies/* building
cd building

cd luajit2
make -j4 && make install
cd ..

cd lua-resty-string
make install
cd ..

cd lua-resty-cookie
make install
cd ..

mkdir /usr/local/share/lua/5.1/resty/
cp -a lua-resty-session/lib/resty/* /usr/local/share/lua/5.1/resty/

cd ..

rm -R /etc/nginx/resty/
mkdir /etc/nginx/resty/
ln -s /usr/local/share/lua/5.1/resty/ /etc/nginx/resty/

tar zxf resty.tgz -C /usr/local/share/lua/5.1/resty

./nginx-update.sh

mv nginx.conf /etc/nginx/nginx.conf
mv naxsi_core.rules /etc/nginx/naxsi_core.rules
mv naxsi_whitelist.rules /etc/nginx/naxsi_whitelist.rules
rm -R /etc/nginx/lua
mv lua /etc/nginx/
mv resty/* /etc/nginx/resty/
mkdir /etc/nginx/sites-enabled/
mv site.conf /etc/nginx/sites-enabled/site.conf

chown -R www-data:www-data /etc/nginx/
chown -R www-data:www-data /usr/local/lib/lua

rm /etc/rc.local
#Create and enable startup script in a service
chmod 500 startup.sh
mv startup.sh /startup.sh
cat <<EOF > /etc/systemd/system/endgame.service
[Unit]
Description=Endgame Startup Script Service

[Service]
Type=forking
ExecStart=/startup.sh

[Install]
WantedBy=multi-user.target
EOF
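# Quick sanity checks after this point (illustrative, not part of the script):
#   systemctl status endgame.service   # startup script wired into systemd
#   systemctl status nginx.service     # front web server
#   nginx -t                           # validate the generated configuration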

#Set startup service only bootable by root to prevent tampering
chown root:root /etc/systemd/system/endgame.service
chmod 600 /etc/systemd/system/endgame.service

#configure nginx with the proper environment variables and hardening
#the heredoc delimiter is quoted so the $(...) substitutions reach the unit
#file unexpanded and are run by /bin/sh at reload/stop time instead
cat <<'EOF' > /lib/systemd/system/nginx.service
[Unit]
Description=nginx - high performance web server
Documentation=https://nginx.org/en/docs/
After=network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf
ExecReload=/bin/sh -c "/bin/kill -s HUP $(/bin/cat /var/run/nginx.pid)"
ExecStop=/bin/sh -c "/bin/kill -s TERM $(/bin/cat /var/run/nginx.pid)"
Environment="LD_LIBRARY_PATH=/usr/local/lib"
ProtectHome=true
NoNewPrivileges=true
ProtectKernelTunables=true
ProtectKernelLogs=true
ProtectControlGroups=true
ProtectKernelModules=yes
KeyringMode=private
ProtectClock=true
ProtectHostname=true

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable endgame.service
systemctl enable nginx.service

rm /etc/sysctl.conf
mv sysctl.conf /etc/sysctl.conf
mv limits.conf /etc/security/limits.conf

echo "*/5 * * * * root cd /etc/nginx/resty/ && ./captcha && nginx -s reload" > /etc/cron.d/endgame

# Update new log rotation configuration for nginx logs
#quoted heredoc again, so the backticks run at rotation time, not now
cat << 'EOF' > /etc/logrotate.d/nginx
/var/log/nginx/*.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    sharedscripts
    postrotate
        if [ -f /var/run/nginx.pid ]; then
            kill -USR1 `cat /var/run/nginx.pid`
        fi
    endscript
}
EOF

#make sure logrotate runs every single day
echo "0 0 * * * /usr/sbin/logrotate -f /etc/logrotate.conf" > /etc/cron.d/logrotate

if $LOCALPROXY; then
    echo "localproxy enabled"
else
    mv torrc2 /etc/tor/torrc2
    mv torrc3 /etc/tor/torrc3
fi

if $TORSETUP; then
    pkill tor
    mv torrc /etc/tor/torrc

    chown -R debian-tor:debian-tor /etc/tor/

    torhash=$(tor --hash-password $TORAUTHPASSWORD | tail -c 62)
    string="s/hashedpassword/"
    string+="$torhash"
    string+="/g"
    sed -i $string /etc/tor/torrc

    sleep 10
    tor
    sleep 20

    TORHOSTNAME="$(cat /etc/tor/hidden_service/hostname)"
    string="s/mainonion/"
    string+="$TORHOSTNAME"
    string+="/g"
    sed -i $string /etc/nginx/sites-enabled/site.conf

    echo "MasterOnionAddress $MASTERONION" > /etc/tor/hidden_service/ob_config

    pkill tor
    sleep 10

    sed -i "s/#HiddenServiceOnionBalanceInstance/HiddenServiceOnionBalanceInstance/g" /etc/tor/torrc

    if $TORINTRODEFENSE; then
        sed -i "s/#HiddenServiceEnableIntroDoS/HiddenServiceEnableIntroDoS/g" /etc/tor/torrc
    fi
    if $TORPOWDEFENSE; then
        sed -i "s/#HiddenServicePoWDefensesEnabled/HiddenServicePoWDefensesEnabled/g" /etc/tor/torrc
    fi
    tor
fi

if $I2PSETUP; then
    mv i2pd.conf /etc/i2pd/i2pd.conf
    mv tunnels.conf /etc/i2pd/tunnels.conf
    systemctl stop i2pd.service
    sleep 5
    systemctl start i2pd.service
    sleep 10
    I2PHOSTNAME=$(head -c 391 /var/lib/i2pd/endgame.dat | sha256sum | cut -f1 -d\  | xxd -r -p | base32 | tr '[:upper:]' '[:lower:]' | sed -r 's/=//g').b32.i2p
    ### I2P configuration
    string="s/i2paddress/"
    string+="$I2PHOSTNAME"
    string+="/g"
    sed -i $string /etc/nginx/sites-enabled/site.conf
fi

cd /etc/nginx/resty/ && ./captcha

rm -R /var/log/nginx/
mkdir /var/log/nginx/
chown www-data:www-data /var/log/nginx

mkdir /etc/nginx/cache/
chown -R www-data:www-data /usr/local/share/lua/5.1/
chown -R www-data:www-data /etc/nginx/

systemctl start nginx.service
systemctl start endgame.service

echo "EndGame Setup Script Finished!"

if $TORSETUP; then
    echo "TOR Hostname:"
    echo $TORHOSTNAME
    echo "Add the address to your gobalance config.yaml file!"
fi

if $I2PSETUP; then
    echo "I2P Hostname:"
    echo $I2PHOSTNAME
fi

if $REBOOT; then
    echo -e "\e[1;35mThis system will now reboot in 10 seconds!"
    sleep 10
    reboot
fi

exit 0
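# Illustrative endgame.config sketch (the variable names are the ones this
# script references; every value below is a made-up placeholder, not a default):
#   MASTERONION="x...x.onion"             # 56-char v3 address plus ".onion"
#   KEY="<random encryption key>"         # must not stay "encryption_key"
#   SALT="8chars!!"                       # exactly 8 characters
#   TORAUTHPASSWORD="<control port password>"
#   SESSION_LENGTH=3600
#   TORSETUP=true  I2PSETUP=false  LOCALPROXY=false  REBOOT=true
#   TORINTRODEFENSE=true  TORPOWDEFENSE=true
#   BACKENDONION1="backend1.onion"  BACKENDONION2="backend2.onion"
#   PROXYPASSURL="127.0.0.1:80"
#   SITENAME="Acme"  SITETAGLINE="tagline"  SITESINCE="2023"
#   HEXCOLOR="6583e9"  HEXCOLORDARK="2c54c4"
#   FAVICON="data:..."  SQUARELOGO="data:..."  NETWORKLOGO="data:..."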
337 endgamefiles/site.conf Normal file
@@ -0,0 +1,337 @@
#right now there is a lot of logging to error_log so during an attack those logs will fill the disk eventually.
#a good idea would be to use a syslog server and log to a socket instead of a file for IO optimization
#logging could also be disabled in production

#depending on cluster setup some things can be changed here.
#keepalive 128; or proxy_bind on multiple local ips can be used to mitigate local port exhaustion
#most likely with this setup it's not the case
#if this runs on the same machine as the application server UNIX sockets should be used instead of TCP
upstream tor {
    server unix:/run/tor_pass1.sock weight=10 fail_timeout=30s;
    server unix:/run/tor_pass2.sock weight=10 fail_timeout=30s;
}

access_by_lua_no_postpone on;
lua_package_path "/etc/nginx/resty/?.lua;;";

init_by_lua_block {
    allowed_hosts = {
        --torconfig "mainonion",
        --torconfig "masterbalanceonion",
        --i2pconfig "i2paddress"
    }

    function in_array(tab, val)
        for index, value in ipairs(tab) do
            if value == val then
                return true
            end
        end
        return nil
    end

    function split(str, sep)
        local result = {}
        local regex = ("([^%s]+)"):format(sep)
        for each in str:gmatch(regex) do
            table.insert(result, each)
        end
        return result
    end

    local function calc_circuit(proxyheaderip)
        if not proxyheaderip then
            return
        end
        local cg = split(proxyheaderip, ":")
        local g1 = cg[5]
        local g2 = cg[6]

        -- zero-pad both 16-bit hex groups to four digits
        local glen = string.len(g1)
        if (glen < 4) then
            for i = (4 - glen),1,-1 do
                g1 = "0" .. g1
            end
        end
        glen = string.len(g2)
        if (glen < 4) then
            for i = (4 - glen),1,-1 do
                g2 = "0" .. g2
            end
        end

        local d1 = (string.sub(g1,1,1) .. string.sub(g1,2,2))
        local d2 = (string.sub(g1,3,3) .. string.sub(g1,4,4))
        local d3 = (string.sub(g2,1,1) .. string.sub(g2,2,2))
        local d4 = (string.sub(g2,3,3) .. string.sub(g2,4,4))
        local circuit_id = ((((bit.lshift(tonumber(d1, 16), 24)) + (bit.lshift(tonumber(d2, 16), 16))) + (bit.lshift(tonumber(d3, 16), 8))) + tonumber(d4, 16))
        return circuit_id
    end

    function kill_circuit(premature, clientip, headerip)
        local circuitid = calc_circuit(headerip)
        if not circuitid then
            return
        end
        local sockfile = "unix:/etc/tor/c1"
        local response = "Closing circuit " .. circuitid .. " "
        local sock = ngx.socket.tcp()
        sock:settimeout(1000)
        local ok, err = sock:connect(sockfile)
        if not ok then
            ngx.log(ngx.ERR, "failed to connect to tor: " .. err)
            return
        end
        ngx.log(ngx.ERR, "connected to tor")

        local bytes, err = sock:send("authenticate \"torauthpassword\"\n")
        if not bytes then
            ngx.log(ngx.ERR, "failed authenticate to tor: " .. err)
            return
        end
        local data, err, partial = sock:receive()
        if not data then
            ngx.log(ngx.ERR, "failed receive data from tor: " .. err)
            return
        end
        local response = response .. " " .. data

        local bytes, err = sock:send("closecircuit " .. circuitid .. "\n")
        if not bytes then
            ngx.log(ngx.ERR, "failed send data to tor: " .. err)
            return
        end
        local data, err, partial = sock:receive()
        if not data then
            ngx.log(ngx.ERR, "failed receive data from tor: " .. err)
            return
        end
        local response = response .. " " .. data

        ngx.log(ngx.ERR, response)
        sock:close()
        return
    end
}
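#Explanatory note (not part of the original file): with
#"HiddenServiceExportCircuitID haproxy", Tor hands nginx a synthetic IPv6
#source address such as fc00:dead:beef:4dad::1234:5678 (placeholder groups).
#calc_circuit() above takes the 5th and 6th colon groups ("1234", "5678"),
#zero-pads each to four hex digits and recombines them into the 32-bit
#circuit ID 0x12345678, which is what the "closecircuit" command on the Tor
#control socket expects.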

#rate limits should be set to the maximum number of resources (css/images/iframes) a page will load. Those should be kept to a minimum for performance reasons
#limiting by proxy_protocol_addr works only if tor is properly passing HiddenServiceExportCircuitID in haproxy form.
#limiting by cookie_<name> works regardless and must be used, otherwise an attacker can solve a captcha by hand and add it to a script/bot to spam
#limiting by X-I2P-DestHash works when using i2p and passing the request to nginx
#the "#torconfig"/"#i2pconfig" prefixes below are stripped by setup.sh to enable the matching lines

#torconfiglimit_req_zone $proxy_protocol_addr zone=circuits:50m rate=6r/s;
limit_req_zone $cookie_dcap zone=capcookie:50m rate=6r/s;
#i2pconfiglimit_req_zone $http_x_i2p_desthash zone=i2pdesthash:50m rate=6r/s;

#caching of dynamic static elements (admin controlled only!)
proxy_cache_path /etc/nginx/cache/ levels=1:2 keys_zone=static:60m use_temp_path=off max_size=500m;

#proxy_protocol only makes sense with V3 onions (exportcircuitid) otherwise it will break things.
#kill_circuit can't be used without it
server {
    #torconfig listen unix:/var/run/nginx1 proxy_protocol bind;
    #i2pconfig listen 127.0.0.1:6969 backlog=65536 reuseport;
    #i2pconfig allow 127.0.0.1;
    #torconfig allow unix:;
    deny all;

    proxy_cache_key "$host$request_uri$is_args$args";
    proxy_cache_valid 200 1d;
    proxy_cache_min_uses 1;
    proxy_cache_use_stale error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_set_header Host $host;
    proxy_cache_lock on;
    proxy_cache_background_update on;
    proxy_cache_revalidate on;
    proxy_cache_methods GET;

    more_clear_headers 'Server:*';
    more_clear_headers 'Vary*';
    more_clear_headers 'kill*';

    #the following is an example of how to cache static content on the front
    #this reduces the amount of requests and makes your site appear faster.
    # location /favicon.ico {
    #     limit_except GET {
    #         deny all;
    #     }
    #     proxy_cache static;
    #     proxy_pass http://tor;
    # }
    #
    # location ~* ^/((images|fonts|css)/)?.*\.(ico|css|jpeg|jpg|png|ttf|webp|pdf)$ {
    #     limit_except GET {
    #         deny all;
    #     }
    #     proxy_cache static;
    #     proxy_pass http://tor;
    # }

    #what to do when the rate limit is triggered: blacklist the cookie (if it exists) and kill the circuit
    location @ratelimit {
        error_log /var/log/nginx/ratelimit.log;
        access_by_lua_block {
            local pa = "no_proxy"
            if ngx.var.proxy_protocol_addr ~= nil then
                pa = ngx.var.proxy_protocol_addr
            end
            local cook = require "resty.cookie"
            local cookie, err = cook:new()
            if not cookie then
                ngx.log(ngx.ERR, err)
                return
            end
            local field, err = cookie:get("dcap")
            if field then
                local blocked_cookies = ngx.shared.blocked_cookies
                blocked_cookies:set(field, 1, sessionconfigvalue)
            end

            ngx.log(ngx.ERR, "Rate limited " .. ngx.var.remote_addr .. "|" .. pa)

            if pa ~= "no_proxy" then
                local ok, err = ngx.timer.at(0, kill_circuit, ngx.var.remote_addr, ngx.var.proxy_protocol_addr)
                if not ok then
                    ngx.log(ngx.ERR, "failed to create timer: ", err)
                    return
                end
            end
            ngx.exit(444)
        }
    }

    #what to do when the WAF is triggered: just show the error page and kill the circuit for now.
    #naxsi seems to kick in before everything else except the rate limiter, but if it does, trash traffic won't make it to the application servers anyway
    #it doesn't make sense to blacklist the cookie here as that would annoy users

    location /waf {
        error_log /var/log/nginx/error.log;
        default_type text/html;
        content_by_lua_block {
            ngx.say("<head><title>Error</title></head>")
            ngx.say("<body bgcolor=\"white\">")
            ngx.say("<center><h1>Error</h1></center>")
            ngx.say("<hr><center><p>Your browser sent a request that this server could not understand.</p></center>")
            ngx.say("<center><p>Most likely your input contains invalid characters (\" , `, etc.) that except for passwords should not be used.</p></center>")
            ngx.say("<center><p>This may also happen if you are trying to send contact information or external links.</p></center>")
            ngx.say("<center><p>Please go back, check your input and try again.</p></center></body>")

            proxyip = "no_proxy"
            torip = ngx.var.remote_addr
            if ngx.var.proxy_protocol_addr ~= nil then
                proxyip = ngx.var.proxy_protocol_addr
            end

            ngx.log(ngx.ERR, "WAF triggered " .. torip .. "|" .. proxyip)
            if proxyip ~= "no_proxy" then
                local ok, err = ngx.timer.at(0, kill_circuit, torip, proxyip)
                if not ok then
                    ngx.log(ngx.ERR, "failed to create timer: ", err)
                    return
                end
            end
        }
    }

    location @502 {
        default_type text/html;
        content_by_lua_block {
            ngx.say("<head><title>502 Timeout</title></head>")
            ngx.say("<body bgcolor=\"white\">")
            ngx.say("<center><h1>502 Timeout</h1></center>")
            ngx.say("<hr><center><p>It seems this endgame front doesn't have a stable connection to the backend right now.</p></center>")
            ngx.say("<center><p>To fix it you can try to reload the page. If that doesn't work, and you end back here, try the following:</p></center>")
            ngx.say("<center><p>On Tor, if getting a new circuit doesn't work, try to get a brand new Tor identity. If that doesn't work come back later.</p></center>")
            ngx.say("<center><p>On I2P, just try and refresh again and again. If that doesn't work restart I2P and wait a couple minutes before trying again.</p></center></body>")
        }
    }

    location /kill {
        access_by_lua_block {
            proxyip = "no_proxy"
            torip = ngx.var.remote_addr
            if ngx.var.proxy_protocol_addr ~= nil then
                proxyip = ngx.var.proxy_protocol_addr
            end

            ngx.log(ngx.ERR, "Kill area visited " .. torip .. "|" .. proxyip)

            local cook = require "resty.cookie"
            local cookie, err = cook:new()
            if not cookie then
                ngx.log(ngx.ERR, err)
                return
            end

            local field, err = cookie:get("dcap")
            if field then
                local blocked_cookies = ngx.shared.blocked_cookies
                blocked_cookies:set(field, 1, sessionconfigvalue)
            end

            if proxyip ~= "no_proxy" then
                local ok, err = ngx.timer.at(0, kill_circuit, torip, proxyip)
                if not ok then
                    ngx.log(ngx.ERR, "failed to create timer: ", err)
                    return
                end
            end
            ngx.exit(444)
        }
    }

    location / {
        aio threads;
        aio_write on;
        #access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        #rate limits per circuit ID (prevents many requests on a single tor circuit)
        #torconfig limit_req zone=circuits burst=8 nodelay;
        #torconfig error_page 503 =503 @ratelimit;

        #rate limits based on the captcha cookie. if an attacker or bot solves the captcha by hand and inputs the cookie in a script
        #the cookie will be blacklisted by all fronts (eventually) and subsequent requests dropped.
        limit_req zone=capcookie burst=8 nodelay;
        error_page 503 =503 @ratelimit;

        #rate limits based on the i2p destination hash (prevents many requests on a single i2p client connection) *DOES NOT KILL CIRCUITS*
        #i2pconfig limit_req zone=i2pdesthash burst=8 nodelay;
        #i2pconfig error_page 503 =503 @ratelimit;

        error_page 502 =502 @502;

        #check if the access captcha is solved, among other things
        access_by_lua_file lua/cap.lua;

        SecRulesEnabled;
        #LearningMode;
        DeniedUrl /waf;
        CheckRule "$SQL >= 8" BLOCK;
        CheckRule "$RFI >= 8" BLOCK;
        CheckRule "$TRAVERSAL >= 4" BLOCK;
        CheckRule "$EVADE >= 4" BLOCK;
        CheckRule "$XSS >= 8" BLOCK;
        include "/etc/nginx/naxsi_whitelist.rules";
        proxy_set_header Host $host;
        proxy_pass http://backendurl;
        header_filter_by_lua_block {
            local cookie, err = cook:new()
            if not cookie then
                ngx.log(ngx.ERR, err)
                return
            end

            if ngx.resp.get_headers()['kill'] ~= nil then
                local field, err = cookie:get("dcap")
                if field then
                    local blocked_cookies = ngx.shared.blocked_cookies
                    blocked_cookies:set(field, 1, sessionconfigvalue)
                end
            end
        }
    }
}
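#Explanatory note (not part of the original file): the header filter above lets
#the backend force a session ban. If the origin responds with any "kill" header,
#the visitor's dcap cookie is blacklisted for the configured session length and
#the header itself is stripped by more_clear_headers before reaching the user.
#A hypothetical backend response that would trigger it:
#   HTTP/1.1 200 OK
#   kill: 1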
22 endgamefiles/sourcecode/captcha-source/Cargo.toml Normal file
@@ -0,0 +1,22 @@
[package]
name = "captcha"
version = "0.1.0"
edition = "2021"

[dependencies]
base64 = "0.21.0"
image = "0.24.6"
imageproc = "0.23.0"
rand = "0.8.5"
rusttype = "0.9.3"
webp = "0.2.2"

[build]
target = "x86_64-unknown-linux-musl"

[profile.release]
lto = true
opt-level = 3
codegen-units = 1
panic = 'abort'
strip = true
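# Note (assumption, not in the original manifest): Cargo reads a [build] table
# with a default target from .cargo/config.toml, not from Cargo.toml, so the
# target above is effectively supplied by rustbuild.sh's explicit
# --target=x86_64-unknown-linux-musl flag instead.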
3 endgamefiles/sourcecode/captcha-source/rustbuild.sh Normal file
@@ -0,0 +1,3 @@
#!/bin/bash

cargo build --release --target=x86_64-unknown-linux-musl
5 endgamefiles/sourcecode/captcha-source/rustsetup.sh Normal file
@@ -0,0 +1,5 @@
#!/bin/bash

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Debian ships musl-gcc in the musl-tools package
apt-get install -y musl-tools
rustup target add x86_64-unknown-linux-musl
BIN endgamefiles/sourcecode/captcha-source/src/font.ttf Normal file
Binary file not shown.
526 endgamefiles/sourcecode/captcha-source/src/main.rs Normal file
@@ -0,0 +1,526 @@
extern crate image;
extern crate imageproc;
extern crate rand;
extern crate rusttype;
extern crate base64;
extern crate webp;

use image::{ImageBuffer, Rgb, Rgba, RgbImage, RgbaImage};
use webp::{Encoder, WebPMemory};
use imageproc::pixelops::interpolate;
use imageproc::drawing::{draw_antialiased_line_segment_mut};
use imageproc::geometric_transformations::rotate_about_center;
use imageproc::geometric_transformations::Interpolation;
use rand::{Rng};
use rusttype::*;
use std::fs;
use base64::{Engine as _, engine::{general_purpose}};
//use base64::{Engine as _, engine::{self, general_purpose}, alphabet};

fn main() {
    let base = std::env::current_dir().unwrap();

    let mut challenge_return = Vec::new();
    for i in 0..50 {
        challenge_return.push(generate_challenge(i));
    }

    let mut challenge_output = String::from("challengeArray = {} \r\n");
    for (key, challenge) in challenge_return.iter().enumerate() {
        challenge_output.push_str(&format!("challengeArray[{}] = {}\r\n", key, challenge));
    }

    let challenge_file = base.join("challenge.lua");
    fs::write(challenge_file, challenge_output).expect("write fail");
}
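// Explanatory note (not part of the original source): each line written to
// challenge.lua carries one pre-rendered challenge in the form
//   challengeArray[N] = "<css rules>*<6-char code>*<base64 webp image>"
// caphtml.lua later picks a random index from this array and splits the string
// on "*" to recover the style block, the expected answer and the inline image.
// A made-up placeholder line for illustration:
//   challengeArray[0] = "input[name=c1]:focus ~ .image {...}*A7QX2M*UklGR..."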

fn generate_challenge(_i: usize) -> String {
    //println!("debug challenge {}", _i);

    let mut rng = rand::thread_rng();

    // dimension
    let width = 160;
    let height = 160;

    // Charset
    let abc: &str = "ACDEFGHIJKLMNPQRSTUVWXYZ12345679";

    let mut img = ImageBuffer::<Rgb<u8>, _>::new(width, height);
    let colour = [26, 30, 35, 255];
    let bg_colour = image::Rgba(colour);
    for pixel in img.pixels_mut() {
        *pixel = Rgb([bg_colour[0], bg_colour[1], bg_colour[2]]);
    }

    //println!("debug {}", 1);

    let mut colours = Vec::new();
    for _ in 0..4 {
        let mut new_colour = [rng.gen_range(90..=255), rng.gen_range(90..=255), rng.gen_range(90..=255), 255];
        new_colour[rng.gen_range(0..2)] = rng.gen_range(180..=255);

        colours.push(image::Rgba(new_colour));
    }

    // Load font file
    let select_font = pick_font();

    let font: Font = Font::try_from_bytes(select_font.font_data).unwrap();

    let font_size = rng.gen_range(select_font.min..=select_font.max) as f32;

    let fake_font_size = font_size * rng.gen_range(0.5..1.5);

    //println!("debug {}", 2);

    // Remove X exclusive line colours
    let mut line_colours = Vec::new();
    for _ in 0..2 {
        line_colours.push(colours.pop().unwrap());
    }

    // Draw a few Arcs
    let arc_max = rng.gen_range(25..=35);
    for _ in 0..arc_max {
        // using possible line colours
        let mut colour = line_colours[rng.gen_range(0..colours.len())];

        if rng.gen_range(0..100) < 25 {
            // using possible valid charset colours
            colour = colours[rng.gen_range(0..colours.len())];
        }

        let x = rng.gen_range(0..width) as i32;
        let y = rng.gen_range(0..height) as i32;
        let start_angle = rng.gen_range(0.0..360.0);
        let end_angle = rng.gen_range(0.0..360.0);
        let radius = rng.gen_range(1..width / 2) as i32;
        let thickness = rng.gen_range(1..=3);

        let colour_rgb = Rgb([colour[0], colour[1], colour[2]]);
        draw_arc(&mut img, (x, y), radius, start_angle, end_angle, colour_rgb, thickness);
    }

    // Draw a few Arcs using line colours
    /*let arc_max = rng.gen_range(15..=25);
    for _ in 0..arc_max {
        let colour = line_colours[rng.gen_range(0..colours.len())];
        let x = rng.gen_range(0..width) as i32;
        let y = rng.gen_range(0..height) as i32;
        let start_angle = rng.gen_range(0.0..360.0);
        let end_angle = rng.gen_range(0.0..360.0);
        let radius = rng.gen_range(1..width / 2) as i32;
        let thickness = rng.gen_range(1..=3);

        let colour_rgb = Rgb([colour[0], colour[1], colour[2]]);
        draw_arc(&mut img, (x, y), radius, start_angle, end_angle, colour_rgb, thickness);
    }*/

    //println!("debug {}", 3);

    // Draw lines
    /*let lines_max = rng.gen_range(15..=25);
    for _ in 0..lines_max {
        let random_colour = line_colours[rng.gen_range(0..line_colours.len())];
        //let thickness = rng.gen_range(0..2);
        let x1 = rng.gen_range(0.0..width as f32);
        let y1 = rng.gen_range(0.0..height as f32);
        let x2 = rng.gen_range(0.0..width as f32);
        let y2 = rng.gen_range(0.0..height as f32);

        let random_colour_rgb = Rgb([random_colour[0], random_colour[1], random_colour[2]]);
        draw_line_segment_mut(&mut img, (x1, y1), (x2, y2), random_colour_rgb);
    }*/

    //println!("debug {}", 4);

    // Draw fake characters
    let char_max = rng.gen_range(60..=80);
    for _ in 0..char_max {
        let random_char = abc.chars().nth(rng.gen_range(0..abc.len())).unwrap();
        let rotation = rng.gen_range(0.0..360.0);
        let random_colour = line_colours[rng.gen_range(0..line_colours.len())];

        let offset_x = rng.gen_range(5.0..(width as f32 - 5.0));
        let offset_y = rng.gen_range(5.0..(height as f32 - 5.0));

        draw_text(&mut img, &random_char.to_string(), &font, fake_font_size, random_colour, offset_x, offset_y, (rotation as f32).to_radians());
    }

    //println!("debug {}", 5);

    // Draw real characters
    let mut char_map = vec![];
    let mut x_pos = rng.gen_range(4.0..6.0);

    while x_pos < (width as f32 - (font_size * 1.5)) {
        let mut y_pos = rng.gen_range(4.0..6.0);

        while y_pos < (height as f32 - (font_size * 1.5)) {
            let y_buffer = rng.gen_range(-1.0..=8.0);
            let random_char = abc.chars().nth(rng.gen_range(0..abc.len())).unwrap();

            y_pos += y_buffer;

            let mut x_buffer = x_pos + rng.gen_range(-1.0..=8.0);
            let rotation = rng.gen_range(0.0..60.0);
            let random_colour = colours[rng.gen_range(0..colours.len())];

            //x_buffer += rng.gen_range(0.0..=1.0);
            //y_pos += rng.gen_range(0.0..=1.0);

            if x_buffer < 0.0 {
                x_buffer = 0.0;
            }
            if y_pos < 0.0 {
                y_pos = 0.0;
            }

            draw_text(&mut img, &random_char.to_string(), &font, font_size, random_colour, x_buffer, y_pos, (rotation as f32).to_radians());
            draw_text(&mut img, &random_char.to_string(), &font, font_size, random_colour, x_buffer + 1.0, y_pos, (rotation as f32).to_radians());

            //let red_pixel = Rgb([255, 0, 0]);
            //img.put_pixel(x_buffer as u32, y_pos as u32, red_pixel);

            //println!("character {} pos {}, {}", random_char.to_string(), x_buffer, y_pos);

            let rotation_offset: f32 = rng.gen_range(-8.0..=8.0);
            let offset_x = rng.gen_range(-1.0..=1.0);
            let offset_y = rng.gen_range(-1.0..=1.0);

            //let rotation_offset: f32 = 0.0;
            //let offset_x = 0.0;
            //let offset_y = 0.0;

            char_map.push( CharInfo{
                x_pos: x_buffer + offset_x,
                y_pos: y_pos + offset_y,
                font_size: font_size,
                text: random_char.to_string(),
                rotation: rotation + rotation_offset,
                xcalc: String::from(""),
                ycalc: String::from(""),
                dcalc: String::from(""),
            });

            y_pos += (font_size * 1.3) as f32;

        }

        x_pos += 4.0;
        x_pos += (font_size * 1.3) as f32;
    }

    //println!("debug {}", 6);

    // Calculate passcode
    let mut passcode = Vec::new();

    for _ in 0..6 {
        let rand_index = rng.gen_range(0..char_map.len());
        let mut rand = char_map.remove(rand_index);

        rand.x_pos -= (rand.font_size as f32) + ((9.0 - rand.font_size as f32) * 1.1);
        rand.y_pos -= rand.font_size as f32 + ((13.0 - rand.font_size as f32) * 1.1);

        if rand.x_pos < 0.0 {
            rand.x_pos = 0.0;
        }
        if rand.y_pos < 0.0 {
            rand.y_pos = 0.0;
        }

        //println!("character {} pos {}, {} rotate {}", rand.text, rand.x_pos, rand.y_pos, rand.rotation);
        //let red_pixel = Rgb([255, 0, 0]);
        //img.put_pixel(rand.x_pos as u32, rand.y_pos as u32, red_pixel);

        rand.xcalc = calc_pos(width as f32 - rand.x_pos);
        rand.ycalc = calc_pos(height as f32 - rand.y_pos);
        rand.dcalc = calc_degrees(rand.rotation);

        passcode.push(rand);
    }

    //println!("debug {}", 7);

    // Passcode input
    let mut html = String::new();
    for i in 0..passcode.len() {
        html.push_str(&format!(
            "input[name=c{}]:focus ~ .image {{
            background-position: calc({}) calc({});
            transform: rotate(calc({})) scale(6) !important;
            }}\n",
            i + 1,
            passcode[i].xcalc,
            passcode[i].ycalc,
            passcode[i].dcalc
        ));
    }

    let captcha_code: String = passcode.iter().map(|code| code.text.clone()).collect();
    let compressed_html = html.split_whitespace().collect::<Vec<&str>>().join(" ");

    let image_data: &[u8] = &img.into_raw();

    let encoder = Encoder::from_rgb(image_data, width, height);
    let encoded_webp: WebPMemory = encoder.encode(70.0);

    let webp_data_slice: &[u8] = &*encoded_webp;

    let base64_encoded_webp = general_purpose::STANDARD.encode(webp_data_slice);
    //const CUSTOM_ENGINE: engine::GeneralPurpose = engine::GeneralPurpose::new(&alphabet::URL_SAFE, general_purpose::NO_PAD);
    //let base64_encoded_webp = CUSTOM_ENGINE.encode(webp_data_slice);

    format!(
        "\"{}*{}*{}\"\r\n",
        compressed_html,
        captcha_code,
        base64_encoded_webp
    )
}

struct CharInfo {
    x_pos: f32,
    y_pos: f32,
    font_size: f32,
    text: String,
    rotation: f32,
    xcalc: String,
    ycalc: String,
    dcalc: String,
}

#[derive(Clone)]
struct CustomFont<'a> {
    font_data: &'a [u8],
    min: u32,
    max: u32,
}

fn pick_font<'a>() -> CustomFont<'a> {
    let font = CustomFont {
        font_data: include_bytes!("font.ttf") as &[u8],
        min: 12,
        max: 17,
    };

    let plain = CustomFont {
        font_data: include_bytes!("plain.ttf") as &[u8],
        min: 13,
        max: 15,
    };

    let fonts = vec![font, plain];

    let mut rng = rand::thread_rng();
    let selected_font = fonts[rng.gen_range(0..fonts.len())].clone();

    selected_font
}

fn draw_text(img: &mut RgbImage, text: &str, font: &Font, font_size: f32, colour: Rgba<u8>, x: f32, y: f32, rotation: f32) {
    let scale = Scale { x: font_size * 1.3, y: font_size * 1.3 };
    let v_metrics = font.v_metrics(scale);

    let text_width: f32 = text
        .chars()
        .map(|c| {
            let glyph = font.glyph(c);
            let scaled_glyph = glyph.scaled(scale);
            let h_metrics = scaled_glyph.h_metrics();
            h_metrics.advance_width
        })
        .sum();

    let text_height = v_metrics.ascent - v_metrics.descent + v_metrics.line_gap;

    let original_bounding_box_size = ((text_width.powi(2) + text_height.powi(2)).sqrt()).ceil() as u32;
|
||||
let mut text_img = RgbaImage::new(original_bounding_box_size, original_bounding_box_size);
|
||||
|
||||
let mut x_offset = (original_bounding_box_size as f32 - text_width) / 3.0;
|
||||
let y_offset = (original_bounding_box_size as f32 + text_height) / 3.0;
|
||||
|
||||
for c in text.chars() {
|
||||
let glyph = font.glyph(c);
|
||||
let scaled_glyph = glyph.scaled(scale);
|
||||
let h_metrics = scaled_glyph.h_metrics();
|
||||
let x_position = x_offset + h_metrics.left_side_bearing;
|
||||
|
||||
let positioned_glyph = scaled_glyph.positioned(Point {
|
||||
x: x_position,
|
||||
y: y_offset,
|
||||
});
|
||||
|
||||
draw_glyph(&mut text_img, &positioned_glyph, colour);
|
||||
|
||||
x_offset += h_metrics.advance_width;
|
||||
}
|
||||
|
||||
let rotated_text_img = if rotation != 0.0 {
|
||||
rotate_about_center(&text_img, rotation, Interpolation::Bilinear, Rgba([0, 0, 0, 0]))
|
||||
} else {
|
||||
text_img
|
||||
};
|
||||
|
||||
let x_min = (x - (rotated_text_img.width() as f32) / 2.0).round() as i32;
|
||||
let y_min = (y - (rotated_text_img.height() as f32) / 2.0).round() as i32;
|
||||
|
||||
alpha_overlay(img, &rotated_text_img, x_min.max(0) as u32, y_min.max(0) as u32);
|
||||
}
|
||||
|
||||
fn draw_arc(img: &mut RgbImage, centre: (i32, i32), radius: i32, start_angle: f32, end_angle: f32, colour: Rgb<u8>, thickness: i32) {
    for offset in -(thickness / 2)..=(thickness / 2) {
        let current_radius = radius + offset;
        let num_points = (current_radius as f32
            * (end_angle - start_angle).abs()
            * std::f32::consts::PI
            / 180.0)
            .round() as usize;
        let angle_diff = (end_angle - start_angle) / num_points as f32;

        // Use current_radius (not the base radius) so each pass of the
        // thickness loop traces its own ring.
        let mut previous_point = Point {
            x: centre.0 + (current_radius as f32 * start_angle.to_radians().cos()).round() as i32,
            y: centre.1 - (current_radius as f32 * start_angle.to_radians().sin()).round() as i32,
        };

        for i in 1..=num_points {
            let angle = start_angle + angle_diff * i as f32;
            let x = centre.0 + (current_radius as f32 * angle.to_radians().cos()).round() as i32;
            let y = centre.1 - (current_radius as f32 * angle.to_radians().sin()).round() as i32;
            let current_point = Point { x, y };

            draw_antialiased_line_segment_mut(
                img,
                (previous_point.x, previous_point.y),
                (current_point.x, current_point.y),
                colour,
                interpolate,
            );

            previous_point = current_point;
        }
    }
}

// Note: this currently returns the source pixel unchanged when it is at all
// visible; the commented-out block is the full "over" compositing formula.
fn blend(src: Rgba<u8>, dst: Rgba<u8>) -> Rgba<u8> {
    let alpha_src = src[3] as f32 / 255.0;
    let alpha_dst = dst[3] as f32 / 255.0;
    let alpha_out = alpha_src + alpha_dst * (1.0 - alpha_src);

    if alpha_out == 0.0 {
        Rgba([0, 0, 0, 0])
    } else {
        let r_src = src[0] as f32 * alpha_src;
        let g_src = src[1] as f32 * alpha_src;
        let b_src = src[2] as f32 * alpha_src;
        let r_dst = dst[0] as f32 * alpha_dst * (1.0 - alpha_src);
        let g_dst = dst[1] as f32 * alpha_dst * (1.0 - alpha_src);
        let b_dst = dst[2] as f32 * alpha_dst * (1.0 - alpha_src);

        // Rgba([
        //     ((r_src + r_dst) / alpha_out) as u8,
        //     ((g_src + g_dst) / alpha_out) as u8,
        //     ((b_src + b_dst) / alpha_out) as u8,
        //     (alpha_out * 255.0) as u8,
        // ])
        Rgba([
            src[0] as u8,
            src[1] as u8,
            src[2] as u8,
            src[3] as u8,
        ])
    }
}

// Rasterise one positioned glyph into the RGBA scratch image, using the
// coverage value `v` as the alpha of the source colour.
fn draw_glyph(img: &mut RgbaImage, glyph: &PositionedGlyph, colour: Rgba<u8>) {
    if let Some(bb) = glyph.pixel_bounding_box() {
        glyph.draw(|x, y, v| {
            let x = x as i32 + bb.min.x;
            let y = y as i32 + bb.min.y;
            if x >= 0 && y >= 0 && x < img.width() as i32 && y < img.height() as i32 {
                let pixel = img.get_pixel_mut(x as u32, y as u32);
                let src_color = Rgba([
                    colour[0],
                    colour[1],
                    colour[2],
                    (v * 255.0) as u8,
                ]);
                let blended_color = blend(src_color, *pixel);
                *pixel = blended_color;
            }
        });
    }
}

// Build an obfuscated CSS calc() chain of random px terms that sums to
// `remain`, so the final background-position is hard to scrape statically.
fn calc_pos(mut remain: f32) -> String {
    let mut math = String::from("0px");

    while remain > 0.0 {
        let float = rand::thread_rng().gen_range(0.0..1.0) / 3.0;
        if remain - float > 0.0 {
            remain -= float;
            math.push_str(&format!(" + {:.2}px", float));
        } else {
            math.push_str(&format!(" + {:.2}px", remain));
            break;
        }
    }

    math
    //format!("{:.2}px", remain)
}

// Same trick for rotation: starting from 0deg and subtracting random terms,
// the chain evaluates to -remain degrees (the counter-rotation that shows the
// zoomed character upright).
fn calc_degrees(mut remain: f32) -> String {
    let mut degrees = String::from("0deg");

    while remain > 0.0 {
        let float = rand::thread_rng().gen_range(0.0..2.0);
        if remain - float > 0.0 {
            remain -= float;
            degrees.push_str(&format!(" - {:.2}deg", float));
        } else {
            degrees.push_str(&format!(" - {:.2}deg", remain));
            break;
        }
    }

    degrees
    //format!("{:.2}deg", remain)
}

// Composite an RGBA sprite onto the RGB canvas at the given offset,
// alpha-blending any partially transparent pixels.
fn alpha_overlay(dest: &mut RgbImage, src: &RgbaImage, x_offset: u32, y_offset: u32) {
    for (x, y, px_rgba) in src.enumerate_pixels() {
        let alpha = px_rgba[3] as f32 / 255.0;
        if alpha > 0.0 {
            let px_rgb = Rgb([
                px_rgba[0],
                px_rgba[1],
                px_rgba[2],
            ]);

            let x_dest = x + x_offset;
            let y_dest = y + y_offset;

            if x_dest < dest.width() && y_dest < dest.height() {
                let dest_px = dest.get_pixel_mut(x_dest, y_dest);

                if alpha >= 1.0 {
                    dest_px[0] = px_rgb[0];
                    dest_px[1] = px_rgb[1];
                    dest_px[2] = px_rgb[2];
                } else {
                    let inverse_alpha = 1.0 - alpha;
                    dest_px[0] = (dest_px[0] as f32 * inverse_alpha + px_rgb[0] as f32 * alpha).round() as u8;
                    dest_px[1] = (dest_px[1] as f32 * inverse_alpha + px_rgb[1] as f32 * alpha).round() as u8;
                    dest_px[2] = (dest_px[2] as f32 * inverse_alpha + px_rgb[2] as f32 * alpha).round() as u8;
                }
            }
        }
    }
}
BIN
endgamefiles/sourcecode/captcha-source/src/plain.ttf
Normal file
Binary file not shown.
102
endgamefiles/sourcecode/gobalance/README.md
Normal file
@ -0,0 +1,102 @@
# GoBalance Enhanced

GoBalance is a rewrite of [onionbalance](https://onionbalance.readthedocs.io) written in Golang by n0tr1v.

The enhanced version has several customizations on top of the rewrite, specifically designed for high traffic onion sites.

### Pros over Python version

- Fast multicore threaded design
- Async communication layer with the Tor process
- Can be compiled to a single binary for container use
- Can be used as a library in a Go app

### Pros over forked version

- First class distinct descriptor support!
- Option to adaptively tune and adjust the timings of both fetching and pushing descriptors. Safer, and no more wondering about tuning params!
- Option for tight timings on introduction rotation. Better results than introduction spam while keeping more front safety (removed because of correlation potential; contact /u/Paris on Dread if absolutely required).
- Ability to split the descriptor push process across multiple different gobalance and Tor processes, allowing for more scalability
- Smart introduction selection based upon how "fresh" the descriptor is and how many instances are active
- 'Strict' checks that verify the configuration is optimal for best performance, with easy to understand and follow messages when it is not

# Tuning Methodology

The goal of any tuning is better results. The results we are looking for in gobalance are as follows:

- Fetch and publish descriptors at the most ideal times, to heighten the availability and overall reachability of an onion service

Now the question is, what are the "most ideal times", and how do we know whether they are making the service more available and reachable?

Being that a configuration can have tens to hundreds of instances, the most ideal times vary widely. It's impossible to realistically set reasonable default configuration params with such variability.

This is why most of the tuning methodology deals with changing and tightening the params based both on the amount of instances and on network conditions.

There are inherent limitations on the Tor network (tor-c) when dealing with onion services:

- Introduction points have a set limit of "life". After 16384 to 32768 (a random value between these) introduce2 cells the introduction point expires. They also expire from old age, between 18 and 24 hours. These values can be seen in /src/core/or/or.h of the tor source code.
- A max of 20 introduction points per descriptor
- HSDIR relays are assigned based upon a changing network consensus randomization value
- Single circuit, single descriptor push/fetch (meaning you need to create a new circuit every time you want to do anything)
- Latency to build the circuits
- No quick way to check if an introduction point is still active or not
- We need to build a circuit to a HSDIR both to get descriptors and to push descriptors (which may or may not return the correct results)
- A soft limit of 32 general-purpose pending circuits, which limits the overall scalability of an onionbalance process

It's impossible to overcome all these limitations completely. But that isn't to say we can't make improvements in the way gobalance handles the Tor network.

For example, the 20 introduction point maximum can be sidestepped if different descriptors are pushed to different HSDIRs. By default, the Tor process publishes the same descriptor to all assigned HSDIRs (based on a network consensus value). With distinct descriptors we publish distinct descriptors (good name, right?) to all assigned HSDIRs. So instead of one descriptor publish process pushing at most 20 introduction points, we push at most 20 introduction points PER HSDIR. Generally there are 8 HSDIRs per consensus, which means 20 × 8 = 160 introduction points: a max of 160 instances individually load balanced. Technically there is enough space in the descriptor to fit 30 introduction points instead of 20. But 20 is hard coded as a limit. Why? Because someone didn't do the math.

Anyway, this fork has distinct descriptors built directly in, giving the largest spread of introduction points on the Tor network. Up to 8 times more reachability!

There is also tuning around when we fetch and push these descriptors. Traditionally there was a hard-coded value for this, which makes little sense, because one operator might have a single front while another has a few hundred. So gobalance, on boot, records the time it takes for the Tor process to get the descriptors of all the configured fronts. It then does some simple calculations to schedule the fetches and descriptor pushes in a far more optimal way. It's not perfect, but it does automatically account for the variability of the Tor network. Which is much better than what onionbalance was traditionally doing: nothing.

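To make that boot-time measurement idea concrete, here is a minimal sketch. This is not the actual gobalance code: `MeasureFetch`, `AdaptiveInterval`, the 3x multiplier, and the 5-minute floor are all illustrative assumptions.

```go
// adaptive_timing_sketch.go — illustrative only, not the gobalance internals.
package main

import (
	"fmt"
	"time"
)

// MeasureFetch times one full descriptor-fetch pass over all configured
// fronts. fetchAll stands in for the real Tor round trips.
func MeasureFetch(fetchAll func() error) (time.Duration, error) {
	start := time.Now()
	if err := fetchAll(); err != nil {
		return 0, err
	}
	return time.Since(start), nil
}

// AdaptiveInterval derives a publish interval from the measured pass time:
// the slower the network (or the more instances), the more headroom we leave.
func AdaptiveInterval(measured time.Duration) time.Duration {
	interval := 3 * measured // placeholder multiplier
	if floor := 5 * time.Minute; interval < floor {
		interval = floor // placeholder floor
	}
	return interval
}

func main() {
	measured, _ := MeasureFetch(func() error {
		time.Sleep(120 * time.Millisecond) // pretend Tor work
		return nil
	})
	fmt.Println("publish every", AdaptiveInterval(measured))
}
```
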
We also tune which descriptors are pushed to the network, putting the most recently received descriptor first on the list. Under regular onion load this would not be optimal (being that it becomes obvious you are running gobalance), but this fork is not designed for regular load. It's designed for high traffic onion service sites. The most recent valid descriptor received has the highest chance of being the most reachable under a DDOS attack. This isn't perfect either, but it has shown considerably better outcomes under high load, with minor load balancing implications under regular load.

Distinct descriptors allow us to push different kinds of introduction points, but that doesn't help if we can't gather the introduction points fast enough. If you had 160 instances, it would take a single Tor process a long time to grab the latest introduction points from all of them. With this fork you can use up to 8 individual gobalance and Tor process pairs to split the load of both getting the introduction points and pushing the descriptors. The way we do this is simple: we limit which HSDIRs each gobalance process thinks it is responsible for, based on their placement around the ring. Being that all Tor processes will have the same consensus, the selection will be the same. This means there are zero overlapping processes conflicting with each other, allowing for much higher availability potential. You can effectively have 32 fronts on each gobalance process for each of the 8 HSDIRs. That's 256 fronts, of which only 160 are active at a single time. This allows for front recovery, the best overall performance, and much tighter refresh timings. When maxed out on the latest Ryzen processors it's possible to handle, all together, circuit rates in the high tens of thousands per second. That number goes up with the optimizations from endgame. A sketch of the responsible-HSDIR split idea follows below.

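A minimal sketch of that responsible-HSDIR split, assuming the documented `--dirsplit` range format. `ParseDirSplit` and `Responsible` are illustrative names, not the real gobalance functions.

```go
// dirsplit_sketch.go — illustrative only: each process claims a sub-range of
// the 8 HSDIR positions so multiple gobalance/Tor pairs never compete.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ParseDirSplit parses a range like "1-2" or "3-8" into lo/hi bounds.
func ParseDirSplit(s string) (lo, hi int, err error) {
	parts := strings.SplitN(s, "-", 2)
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("dirsplit %q: want a range like 1-8", s)
	}
	if lo, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, err
	}
	if hi, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, err
	}
	if lo < 1 || hi > 8 || lo > hi {
		return 0, 0, fmt.Errorf("dirsplit %q out of the 1-8 range", s)
	}
	return lo, hi, nil
}

// Responsible reports whether this process should push to the HSDIR at
// 1-based position pos in the ring order shared via the consensus.
func Responsible(pos, lo, hi int) bool {
	return pos >= lo && pos <= hi
}

func main() {
	lo, hi, err := ParseDirSplit("3-8")
	if err != nil {
		panic(err)
	}
	for pos := 1; pos <= 8; pos++ {
		fmt.Printf("HSDIR position %d -> this process: %v\n", pos, Responsible(pos, lo, hi))
	}
}
```
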
TLDR: There are optimizations in the fetching, selecting, and pushing of introduction points to the Tor network, allowing for better reachability for onion services of all sizes. They are most valuable for large ones which are getting DDOSED to death.

# Boot Config Flags

You can see all of these by running

- `./gobalance --help`

- --ip value, -i value Tor control IP address (default: "127.0.0.1")
- --port value, -p value Tor control port (default: 9051)
- --torPassword value, --tor-password value Tor control password
- --config value, -c value Config file location (default: "config.yaml")
- --quick, -q Quickly publish a new descriptor (for HSDIR descriptor failures/tests) (default: false)
- --adaptive, -a Adaptive publishing changes the way descriptors are published to prioritize descriptor rotation on the HSDIR. A counter to introduction cell attacks (with enough scale) and a more private version of introduction spamming. (default: true)
- --strict, -s Strictly adhere to adaptive algorithms and, at the start, panic if non-optimal conditions are found. (default: false)
- --dirsplit value, --ds value Splits the descriptor submission to the network, allowing for multiple gobalance processes to work as a single noncompetitive unit. This allows for more flexible scaling on fronts, as many Tor processes can be safely used. Valid values are ranges (like 1-2 or 3-8). Cover all ranges from 1-8 across all processes! (default: "1-8")
- --verbosity value, --vv value Minimum verbosity level for logging (available in ascending order: debug, info, warning, error, critical) (default: "info")
- --help, -h show help (default: false)
- --version, -v print the version (default: false)

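As an example, a process that should claim only the first half of the HSDIR ring and log only warnings and above could be started like this (values are illustrative):

- `./gobalance -c config.yaml --ds 1-4 --vv warning`
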
# Compiling

- `go get -u` - updates all dependencies
- `go mod vendor` - stores the updates in the vendor folder
- `go build -o gobalance main.go` - builds the gobalance application

# Generate Configuration

- `./gobalance g`

Or simply use your python onionbalance one! Drop-in replacement support (no multisite)!

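For reference, this is the shape of the file `./gobalance g` writes (field names follow the `ConfigData`/`ServiceConfig`/`InstanceConfig` structs in the source; the key file name and address below are placeholders):

```yaml
services:
    - key: youronionaddress.key
      instances:
          - address: "<Enter the instance onion address here>"
```
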
# Running

After you have configured your gobalance, you will need a Tor process on your localhost. There is a provided torrc file. Run it with Tor like this:

- `tor -f torrc`

After that, run gobalance:

- `./gobalance`

If you need to run these in the background (in the event your server connection dies or drops) you can use `nohup` or a detached terminal session.
I, /u/Paris, recommend just running it locally with geo redundancy, so you don't need to worry about server crashes or compromises. Onion key safety is your absolute priority. When it's compromised, your operation is done.

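For example (illustrative; any equivalent process supervision works):

- `nohup ./gobalance > gobalance.log 2>&1 &`
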
# Notes

POW is around the corner, and this gobalance process does not parse the new POW descriptor variables. After POW is released to the network, an update will need to come.
21
endgamefiles/sourcecode/gobalance/go.mod
Normal file
@ -0,0 +1,21 @@
module gobalance

go 1.18

require (
	github.com/sirupsen/logrus v1.9.3
	github.com/stretchr/testify v1.8.0
	github.com/urfave/cli/v2 v2.25.7
	golang.org/x/crypto v0.13.0
	gopkg.in/yaml.v3 v3.0.1
	maze.io/x/crypto v0.0.0-20190131090603-9b94c9afe066
)

require (
	github.com/cpuguy83/go-md2man/v2 v2.0.2 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/russross/blackfriday/v2 v2.1.0 // indirect
	github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 // indirect
	golang.org/x/sys v0.12.0 // indirect
)
47
endgamefiles/sourcecode/gobalance/go.sum
Normal file
@ -0,0 +1,47 @@
github.com/cpuguy83/go-md2man/v2 v2.0.2 h1:p1EgwI/C7NhT0JmVkwCD2ZBK8j4aeHQX2pMHHBfMQ6w=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/urfave/cli/v2 v2.23.7 h1:YHDQ46s3VghFHFf1DdF+Sh7H4RqhcM+t0TmZRJx4oJY=
github.com/urfave/cli/v2 v2.23.7/go.mod h1:GHupkWPMM0M/sj1a2b4wUrWBPzazNrIjouW6fmdJLxc=
github.com/urfave/cli/v2 v2.25.3 h1:VJkt6wvEBOoSjPFQvOkv6iWIrsJyCrKGtCtxXWwmGeY=
github.com/urfave/cli/v2 v2.25.3/go.mod h1:GHupkWPMM0M/sj1a2b4wUrWBPzazNrIjouW6fmdJLxc=
github.com/urfave/cli/v2 v2.25.7 h1:VAzn5oq403l5pHjc4OhD54+XGO9cdKVL/7lDjF+iKUs=
github.com/urfave/cli/v2 v2.25.7/go.mod h1:8qnjx1vcq5s2/wpsqoZFndg2CE5tNFyrTvS6SinrnYQ=
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673 h1:bAn7/zixMGCfxrRTfdpNzjtPYqr8smhKouy9mxVdGPU=
github.com/xrash/smetrics v0.0.0-20201216005158-039620a65673/go.mod h1:N3UwUGtsrSj3ccvlPHLoLsHnpR27oXr4ZE984MbSER8=
golang.org/x/crypto v0.5.0 h1:U/0M97KRkSFvyD/3FSmdP5W5swImpNgle/EHFhOsQPE=
golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
golang.org/x/crypto v0.8.0 h1:pd9TJtTueMTVQXzk8E2XESSMQDj/U7OUu0PqJqPXQjQ=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/crypto v0.13.0 h1:mvySKfSWJ+UKUii46M40LOvyWfN0s2U+46/jDd0e6Ck=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0 h1:EBmGv8NaZBZTWvrbjNoL6HVt+IVy3QDQpJs7VRIw3tU=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
maze.io/x/crypto v0.0.0-20190131090603-9b94c9afe066 h1:UrD21H1Ue5Nl8f2x/NQJBRdc49YGmla3mRStinH8CCE=
maze.io/x/crypto v0.0.0-20190131090603-9b94c9afe066/go.mod h1:DEvumi+swYmlKxSlnsvPwS15tRjoypCCeJFXswU5FfQ=
206
endgamefiles/sourcecode/gobalance/main.go
Normal file
@ -0,0 +1,206 @@
package main

import (
	"crypto/ed25519"
	_ "crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"errors"
	"github.com/sirupsen/logrus"
	"github.com/urfave/cli/v2"
	"gobalance/pkg/brand"
	"gobalance/pkg/onionbalance"
	"gobalance/pkg/stem/descriptor"
	_ "golang.org/x/crypto/sha3"
	"gopkg.in/yaml.v3"
	"os"
	"path/filepath"
	"strings"
)

// https://onionbalance.readthedocs.io
// https://github.com/torproject/torspec/blob/main/control-spec.txt
// https://github.com/torproject/torspec/blob/main/rend-spec-v3.txt

var appVersion = "1.0.0"

func main() {
	logrus.SetLevel(logrus.DebugLevel)

	customFormatter := new(logrus.TextFormatter)
	customFormatter.TimestampFormat = "2006-01-02 15:04:05"
	logrus.SetFormatter(customFormatter)
	customFormatter.FullTimestamp = true

	app := &cli.App{
		Name:    "gobalance",
		Usage:   "Golang rewrite of onionbalance",
		Authors: []*cli.Author{{Name: "n0tr1v", Email: "n0tr1v@protonmail.com"}, {Name: "Paris", Email: "amazingsights@inter.net(Not Real)"}},
		Version: appVersion,
		Flags: []cli.Flag{
			&cli.StringFlag{
				Name:    "ip",
				Aliases: []string{"i"},
				Usage:   "Tor control IP address",
				Value:   "127.0.0.1",
			},
			&cli.IntFlag{
				Name:    "port",
				Aliases: []string{"p"},
				Usage:   "Tor control port",
				Value:   9051,
			},
			&cli.StringFlag{
				Name:    "torPassword",
				Aliases: []string{"tor-password"},
				Usage:   "Tor control password",
			},
			&cli.StringFlag{
				Name:    "config",
				Aliases: []string{"c"},
				Usage:   "Config file location",
				Value:   "config.yaml",
			},
			&cli.BoolFlag{
				Name:    "quick",
				Aliases: []string{"q"},
				Usage:   "Quickly publish a new descriptor (for HSDIR descriptor failures/tests)",
			},
			&cli.BoolFlag{
				Name:    "adaptive",
				Aliases: []string{"a"},
				Usage: "Adaptive publishing changes the way descriptors are published to prioritize descriptor rotation on the HSDIR. " +
					"A counter to introduction cell attacks (with enough scale) and a more private version of introduction spamming. The default is true.",
				Value: true,
			},
			&cli.BoolFlag{
				Name:    "tight",
				Aliases: []string{"t"},
				Usage: "Use tight adaptive descriptor timings. This is effectively a safe version of introduction spamming. " +
					"Most useful in the case of DDOS. Strains and potentially crashes the Tor process. The default is false.",
				Value: false,
			},
			&cli.BoolFlag{
				Name:    "strict",
				Aliases: []string{"s"},
				Usage:   "Strictly adhere to adaptive algorithms and, at the start, panic if non-optimal conditions are found. The default is false.",
				Value:   false,
			},
			&cli.StringFlag{
				Name:    "dirsplit",
				Aliases: []string{"ds"},
				Usage: "'Responsible HSDIR split' splits the descriptor submission to the network. " +
					"Allowing for multiple gobalance processes to work as a single noncompetitive unit. " +
					"This allows for more flexible scaling on fronts as many Tor processes can be safely used. " +
					"Valid values are ranges (like 1-2 or 3-8). Cover all ranges from 1-8 on all processes! The default is 1-8.",
				Value: "1-8",
			},
			&cli.StringFlag{
				Name:    "verbosity",
				Aliases: []string{"vv"},
				Usage:   "Minimum verbosity level for logging. Available in ascending order: debug, info, warning, error, critical. The default is info.",
				Value:   "info",
			},
		},
		Action: mainAction,
		Commands: []*cli.Command{
			{
				Name:    "generate-config",
				Aliases: []string{"g"},
				Usage:   "generate a config.yaml file",
				Action:  generateConfigAction,
			},
		},
	}
	err := app.Run(os.Args)
	if err != nil {
		logrus.Fatal(err)
	}
}

func mainAction(c *cli.Context) error {
	verbosity := c.String("verbosity")

	logLvl := logrus.InfoLevel
	switch verbosity {
	case "debug":
		logLvl = logrus.DebugLevel
	case "info":
		logLvl = logrus.InfoLevel
	case "warning":
		logLvl = logrus.WarnLevel
	case "error":
		logLvl = logrus.ErrorLevel
	case "critical":
		logLvl = logrus.FatalLevel
	default:
		panic("Invalid 'verbosity' value. Valid values are: debug, info, warning, error, critical.")
	}
	logrus.SetLevel(logLvl)

	logrus.Warningf("Initializing gobalance (version: %s)...", appVersion)
	onionbalance.Main(c)
	select {}
}

func fileExists(filePath string) bool {
	if _, err := os.Stat(filePath); errors.Is(err, os.ErrNotExist) {
		return false
	}
	return true
}

func generateConfigAction(*cli.Context) error {
	/*
		Enter path to store generated config
		Number of services (frontends) to create (default: 1):
		Enter path to master service private key (i.e. path to 'hs_ed25519_secret_key') (Leave empty to generate a key)
		Number of instance services to create (default: 2) (min: 1, max: 8)
		Provide a tag name to group these instances [node]

		Done! Successfully generated OnionBalance config
		Now please edit 'config/config.yaml' with a text editor to add/remove/edit your backend instances
	*/
	configFilePath, _ := filepath.Abs("./config.yaml")
	if fileExists(configFilePath) {
		logrus.Fatalf("config file %s already exists", configFilePath)
	}

	masterPublicKey, masterPrivateKey, _ := ed25519.GenerateKey(brand.Reader())
	masterPrivateKeyDer, _ := x509.MarshalPKCS8PrivateKey(masterPrivateKey)
	block := &pem.Block{Type: "PRIVATE KEY", Bytes: masterPrivateKeyDer}
	onionAddress := descriptor.AddressFromIdentityKey(masterPublicKey)
	masterKeyFileName := strings.TrimSuffix(onionAddress, ".onion") + ".key"
	masterKeyFile, err := os.Create(masterKeyFileName)
	if err != nil {
		logrus.Fatal(err)
	}
	defer func(masterKeyFile *os.File) {
		err := masterKeyFile.Close()
		if err != nil {
			logrus.Fatal(err)
		}
	}(masterKeyFile)
	_ = pem.Encode(masterKeyFile, block)

	configFile, err := os.Create(configFilePath)
	if err != nil {
		logrus.Fatal(err)
	}
	defer func(configFile *os.File) {
		err := configFile.Close()
		if err != nil {
			logrus.Fatal(err)
		}
	}(configFile)
	data := onionbalance.ConfigData{
		Services: []onionbalance.ServiceConfig{{
			Key:       masterKeyFileName,
			Instances: []onionbalance.InstanceConfig{{Address: "<Enter the instance onion address here>"}},
		}},
	}
	if err := yaml.NewEncoder(configFile).Encode(data); err != nil {
		logrus.Fatal(err)
	}
	return nil
}
60
endgamefiles/sourcecode/gobalance/pkg/brand/brand.go
Normal file
@ -0,0 +1,60 @@
package brand

import (
	cryptoRand "crypto/rand"
	"io"
	mathRand "math/rand"
	"time"
)

// brand - balance-random - is a hack, so we can have deterministic random if needed

const needDeterministic = false

// https://github.com/dustin/randbo
type randbo struct {
	mathRand.Source
}

var deterministicReader = New()

func New() io.Reader {
	return NewFrom(mathRand.NewSource(time.Now().UnixNano()))
}

// NewFrom creates a new reader from your own rand.Source
func NewFrom(src mathRand.Source) io.Reader {
	return &randbo{src}
}

// Read satisfies io.Reader
func (r *randbo) Read(p []byte) (n int, err error) {
	todo := len(p)
	offset := 0
	for {
		val := int64(r.Int63())
		for i := 0; i < 8; i++ {
			p[offset] = byte(val)
			todo--
			if todo == 0 {
				return len(p), nil
			}
			offset++
			val >>= 8
		}
	}
}

func Read(b []byte) (n int, err error) {
	if needDeterministic {
		return deterministicReader.Read(b)
	}
	return cryptoRand.Read(b)
}

func Reader() io.Reader {
	if needDeterministic {
		return deterministicReader
	}
	return cryptoRand.Reader
}
5
endgamefiles/sourcecode/gobalance/pkg/btime/btime.go
Normal file
@ -0,0 +1,5 @@
package btime

import "gobalance/pkg/clockwork"

var Clock = clockwork.NewRealClock()
319
endgamefiles/sourcecode/gobalance/pkg/clockwork/clockwork.go
Normal file
@ -0,0 +1,319 @@
package clockwork

import (
	"sync"
	"sync/atomic"
	"time"
)

// Clock provides an interface that packages can use instead of directly
// using the time module, so that chronology-related behavior can be tested
type Clock interface {
	After(d time.Duration) <-chan time.Time
	Sleep(d time.Duration)
	Now() time.Time
	Since(t time.Time) time.Duration
	Until(t time.Time) time.Duration
	NewTicker(d time.Duration) Ticker
	NewTimer(d time.Duration) Timer
	AfterFunc(d time.Duration, f func()) Timer
	Location() *time.Location
}

// Timer provides an interface to a time.Timer which is testable.
// See https://golang.org/pkg/time/#Timer for more details on how timers work.
type Timer interface {
	C() <-chan time.Time
	Reset(d time.Duration) bool
	Stop() bool
	T() *time.Timer // underlying *time.Timer (nil when using a FakeClock)
}

func (rc *realClock) NewTimer(d time.Duration) Timer {
	return &realTimer{time.NewTimer(d)}
}

func (rc *realClock) AfterFunc(d time.Duration, f func()) Timer {
	return &realTimer{time.AfterFunc(d, f)}
}

type realTimer struct {
	t *time.Timer
}

func (rt *realTimer) C() <-chan time.Time { return rt.t.C }
func (rt *realTimer) T() *time.Timer      { return rt.t }

func (rt *realTimer) Reset(d time.Duration) bool {
	return rt.t.Reset(d)
}

func (rt *realTimer) Stop() bool {
	return rt.t.Stop()
}

// FakeClock provides an interface for a clock which can be
// manually advanced through time
type FakeClock interface {
	Clock
	// Advance advances the FakeClock to a new point in time, ensuring any existing
	// sleepers are notified appropriately before returning
	Advance(d time.Duration)
	// BlockUntil will block until the FakeClock has the given number of
	// sleepers (callers of Sleep or After)
	BlockUntil(n int)
}

// NewRealClock returns a Clock which simply delegates calls to the actual time
// package; it should be used by packages in production.
func NewRealClock() Clock {
	return &realClock{}
}

// NewRealClockInLocation ...
func NewRealClockInLocation(location *time.Location) Clock {
	return &realClock{loc: location}
}

// NewFakeClock returns a FakeClock implementation which can be
// manually advanced through time for testing. The initial time of the
// FakeClock will be an arbitrary non-zero time.
func NewFakeClock() FakeClock {
	// use a fixture that does not fulfill Time.IsZero()
	return NewFakeClockAt(time.Date(1984, time.April, 4, 0, 0, 0, 0, time.UTC))
}

// NewFakeClockAt returns a FakeClock initialised at the given time.Time.
func NewFakeClockAt(t time.Time) FakeClock {
	return &fakeClock{
		time: t,
	}
}

type realClock struct {
	loc *time.Location
}

func (rc *realClock) Location() *time.Location {
	return time.Now().Location()
}

func (rc *realClock) After(d time.Duration) <-chan time.Time {
	return time.After(d)
}

func (rc *realClock) Sleep(d time.Duration) {
	time.Sleep(d)
}

func (rc *realClock) Now() time.Time {
	if rc.loc != nil {
		return time.Now().In(rc.loc)
	}
	return time.Now()
}

func (rc *realClock) Since(t time.Time) time.Duration {
	return rc.Now().Sub(t)
}

func (rc *realClock) Until(t time.Time) time.Duration {
	return t.Sub(rc.Now())
}

func (rc *realClock) NewTicker(d time.Duration) Ticker {
	return &realTicker{time.NewTicker(d)}
}

type fakeClock struct {
	sleepers []*sleeper
	blockers []*blocker
	time     time.Time

	l sync.RWMutex
}

// sleeper represents a waiting timer from NewTimer, Sleep, After, etc.
type sleeper struct {
	until    time.Time
	done     uint32
	callback func(interface{}, time.Time)
	arg      interface{}
	ch       chan time.Time
	fc       *fakeClock // needed for Reset()
}

func (s *sleeper) awaken(now time.Time) {
	if atomic.CompareAndSwapUint32(&s.done, 0, 1) {
		s.callback(s.arg, now)
	}
}

func (s *sleeper) C() <-chan time.Time { return s.ch }
func (s *sleeper) T() *time.Timer      { return nil }

func (s *sleeper) Reset(d time.Duration) bool {
	active := s.Stop()
	s.until = s.fc.Now().Add(d)
	defer s.fc.addTimer(s)
	defer atomic.StoreUint32(&s.done, 0)
	return active
}

func (s *sleeper) Stop() bool {
	stopped := atomic.CompareAndSwapUint32(&s.done, 0, 1)
	if stopped {
		// Expire the timer and notify blockers
		s.until = s.fc.Now()
		s.fc.Advance(0)
	}
	return stopped
}

// blocker represents a caller of BlockUntil
type blocker struct {
	count int
	ch    chan struct{}
}

// After mimics time.After; it waits for the given duration to elapse on the
// fakeClock, then sends the current time on the returned channel.
func (fc *fakeClock) After(d time.Duration) <-chan time.Time {
	return fc.NewTimer(d).C()
}

// NewTimer creates a new Timer that will send the current time on its channel
// after the given duration elapses on the fake clock.
func (fc *fakeClock) NewTimer(d time.Duration) Timer {
	sendTime := func(c interface{}, now time.Time) {
		c.(chan time.Time) <- now
	}
	done := make(chan time.Time, 1)
	s := &sleeper{
		fc:       fc,
		until:    fc.time.Add(d),
		callback: sendTime,
		arg:      done,
		ch:       done,
	}
	fc.addTimer(s)
	return s
}

// AfterFunc waits for the duration to elapse on the fake clock and then calls f
// in its own goroutine.
// It returns a Timer that can be used to cancel the call using its Stop method.
func (fc *fakeClock) AfterFunc(d time.Duration, f func()) Timer {
	goFunc := func(fn interface{}, _ time.Time) {
		go fn.(func())()
	}
	s := &sleeper{
		fc:       fc,
		until:    fc.time.Add(d),
		callback: goFunc,
		arg:      f,
		// zero-valued ch, the same as it is in the `time` pkg
	}
	fc.addTimer(s)
	return s
}

func (fc *fakeClock) addTimer(s *sleeper) {
	fc.l.Lock()
	defer fc.l.Unlock()
	now := fc.time
	if now.Sub(s.until) >= 0 {
		// special case - trigger immediately
		s.awaken(now)
	} else {
		// otherwise, add to the set of sleepers
		fc.sleepers = append(fc.sleepers, s)
		// and notify any blockers
		fc.blockers = notifyBlockers(fc.blockers, len(fc.sleepers))
	}
}

// notifyBlockers notifies all the blockers waiting until the
// given number of sleepers are waiting on the fakeClock. It
// returns an updated slice of blockers (i.e. those still waiting)
func notifyBlockers(blockers []*blocker, count int) (newBlockers []*blocker) {
	for _, b := range blockers {
		if b.count == count {
			close(b.ch)
		} else {
			newBlockers = append(newBlockers, b)
		}
	}
	return
}

// Sleep blocks until the given duration has passed on the fakeClock
func (fc *fakeClock) Sleep(d time.Duration) {
	<-fc.After(d)
}

// Now returns the current time of the fakeClock
func (fc *fakeClock) Now() time.Time {
	fc.l.RLock()
	t := fc.time
	fc.l.RUnlock()
	return t
}

// Since returns the duration that has passed since the given time on the fakeClock
func (fc *fakeClock) Since(t time.Time) time.Duration {
	return fc.Now().Sub(t)
}

// Until returns the duration until the given time on the fakeClock
func (fc *fakeClock) Until(t time.Time) time.Duration {
	return t.Sub(fc.Now())
}

func (fc *fakeClock) Location() *time.Location {
	return fc.time.Location()
}

func (fc *fakeClock) NewTicker(d time.Duration) Ticker {
	ft := &fakeTicker{
		c:      make(chan time.Time, 1),
		stop:   make(chan bool, 1),
		clock:  fc,
		period: d,
	}
	go ft.tick()
	return ft
}

// Advance advances fakeClock to a new point in time, ensuring channels from any
// previous invocations of After are notified appropriately before returning
func (fc *fakeClock) Advance(d time.Duration) {
	fc.l.Lock()
	defer fc.l.Unlock()
	end := fc.time.Add(d)
	var newSleepers []*sleeper
	for _, s := range fc.sleepers {
		if end.Sub(s.until) >= 0 {
			s.awaken(end)
		} else {
			newSleepers = append(newSleepers, s)
		}
	}
	fc.sleepers = newSleepers
	fc.blockers = notifyBlockers(fc.blockers, len(fc.sleepers))
	fc.time = end
}

// BlockUntil will block until the fakeClock has the given number of sleepers
// (callers of Sleep or After)
func (fc *fakeClock) BlockUntil(n int) {
	fc.l.Lock()
	// Fast path: current number of sleepers is what we're looking for
	if len(fc.sleepers) == n {
		fc.l.Unlock()
		return
	}
	// Otherwise, set up a new blocker
	b := &blocker{
		count: n,
		ch:    make(chan struct{}),
	}
	fc.blockers = append(fc.blockers, b)
	fc.l.Unlock()
	<-b.ch
}
66
endgamefiles/sourcecode/gobalance/pkg/clockwork/ticker.go
Normal file
@ -0,0 +1,66 @@
package clockwork

import (
	"time"
)

// Ticker provides an interface which can be used instead of directly
// using the ticker within the time module. The real-time ticker t
// provides ticks through t.C, which becomes t.Chan() here, to make
// this channel requirement definable in this interface.
type Ticker interface {
	Chan() <-chan time.Time
	Stop()
}

type realTicker struct{ *time.Ticker }

func (rt *realTicker) Chan() <-chan time.Time {
	return rt.C
}

type fakeTicker struct {
	c      chan time.Time
	stop   chan bool
	clock  FakeClock
	period time.Duration
}

func (ft *fakeTicker) Chan() <-chan time.Time {
	return ft.c
}

func (ft *fakeTicker) Stop() {
	ft.stop <- true
}

// tick sends the tick time to the ticker channel after every period.
// Tick events are discarded if the underlying ticker channel does
// not have enough capacity.
func (ft *fakeTicker) tick() {
	tick := ft.clock.Now()
	for {
		tick = tick.Add(ft.period)
		remaining := tick.Sub(ft.clock.Now())
		if remaining <= 0 {
			// The tick should have already happened. This can happen when
			// Advance() is called on the fake clock with a duration larger
			// than this ticker's period.
			select {
			case ft.c <- tick:
			default:
			}
			continue
		}

		select {
		case <-ft.stop:
			return
		case <-ft.clock.After(remaining):
			select {
			case ft.c <- tick:
			default:
			}
		}
	}
}
40
endgamefiles/sourcecode/gobalance/pkg/gobpk/gobpk.go
Normal file
@ -0,0 +1,40 @@
package gobpk

import (
	"crypto/ed25519"
	"gobalance/pkg/onionbalance/hs_v3/ext"
)

// gobpk == gobalance private key

// PrivateKey is a wrapper around an ed25519 private key that handles both the
// tor format and the standard format.
type PrivateKey struct {
	isPrivKeyInTorFormat bool
	privateKey           ed25519.PrivateKey
}

// Public returns the public key bytes
func (k PrivateKey) Public() ed25519.PublicKey {
	if k.isPrivKeyInTorFormat {
		return ext.PublickeyFromESK(k.privateKey)
	}
	return k.privateKey.Public().(ed25519.PublicKey)
}

// Seed returns the underlying ed25519 private key seed
func (k PrivateKey) Seed() []byte {
	return k.privateKey.Seed()
}

// IsPrivKeyInTorFormat returns whether or not the private key is in tor format
func (k PrivateKey) IsPrivKeyInTorFormat() bool {
	return k.isPrivKeyInTorFormat
}

// New creates a new PrivateKey
func New(privateKey ed25519.PrivateKey, isPrivKeyInTorFormat bool) PrivateKey {
	return PrivateKey{
		privateKey:           privateKey,
		isPrivKeyInTorFormat: isPrivKeyInTorFormat,
	}
}
21
endgamefiles/sourcecode/gobalance/pkg/onionbalance/config.go
Normal file
@ -0,0 +1,21 @@
package onionbalance

import "encoding/json"

type InstanceConfig struct {
	Address string
}

type ServiceConfig struct {
	Key       string
	Instances []InstanceConfig
}

type ConfigData struct {
	Services []ServiceConfig
}

func (c ConfigData) String() string {
	by, _ := json.Marshal(c)
	return string(by)
}
584
endgamefiles/sourcecode/gobalance/pkg/onionbalance/consensus.go
Normal file
@ -0,0 +1,584 @@
|
||||
package onionbalance
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"encoding/base64"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/sirupsen/logrus"
|
||||
"gobalance/pkg/btime"
|
||||
"io"
|
||||
"net"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
type Consensus struct {
|
||||
Nodes []*TorNode
|
||||
nodeMtx sync.RWMutex
|
||||
consensus *ConsensusDoc
|
||||
controller *Controller
|
||||
}
|
||||
|
||||
func NewConsensus(controller *Controller, doRefreshConsensus bool) *Consensus {
|
||||
c := &Consensus{}
|
||||
c.controller = controller
|
||||
// A list of tor_node:Node objects contained in the current consensus
|
||||
c.SetNodes(nil)
|
||||
// A stem NetworkStatusDocumentV3 object representing the current consensus
|
||||
c.consensus = nil
|
||||
if !doRefreshConsensus {
|
||||
return c
|
||||
}
|
||||
c.refresh()
|
||||
return c
|
||||
}
|
||||
|
||||
func (c *Consensus) GetNodes() []*TorNode {
|
||||
c.nodeMtx.RLock()
|
||||
defer c.nodeMtx.RUnlock()
|
||||
return c.Nodes
|
||||
}
|
||||
|
||||
func (c *Consensus) SetNodes(newNodes []*TorNode) {
|
||||
c.nodeMtx.Lock()
|
||||
defer c.nodeMtx.Unlock()
|
||||
c.Nodes = newNodes
|
||||
}
|
||||
|
||||
func (c *Consensus) Consensus() *ConsensusDoc {
|
||||
return c.consensus
|
||||
}
|
||||
|
||||
// Attempt to refresh the consensus with the latest one available.
|
||||
func (c *Consensus) refresh() {
|
||||
mdConsensusStr, err := c.controller.GetMdConsensus()
|
||||
if err != nil {
|
||||
logrus.Errorf("Failed to GetMdConsensus: %v", err)
|
||||
return
|
||||
}
|
||||
c.consensus, err = NetworkStatusDocumentV3(mdConsensusStr)
|
||||
if err != nil {
|
||||
logrus.Warn("No valid consensus received. Waiting for one...")
|
||||
return
|
||||
}
|
||||
if !c.IsLive() {
|
||||
logrus.Info("Loaded consensus is not live. Waiting for a live one.")
|
||||
return
|
||||
}
|
||||
c.SetNodes(c.initializeNodes())
|
||||
}
|
||||
|
||||
// IsLive return True if the consensus is live.
|
||||
// This function replicates the behavior of the little-t-tor
|
||||
// networkstatus_get_reasonably_live_consensus() function.
|
||||
func (c *Consensus) IsLive() bool {
|
||||
if c.consensus == nil {
|
||||
return false
|
||||
}
|
||||
reasonablyLiveTime := 24 * 60 * 60 * time.Second
|
||||
now := btime.Clock.Now().UTC()
|
||||
isLive := now.After(c.consensus.ValidAfter.Add(-reasonablyLiveTime)) &&
|
||||
now.Before(c.consensus.ValidUntil.Add(reasonablyLiveTime))
|
||||
return isLive
|
||||
}
|
||||
|
||||
func (c *Consensus) initializeNodes() []*TorNode {
|
||||
nodes := make([]*TorNode, 0)
|
||||
microdescriptorsList, err := c.controller.GetMicrodescriptors()
|
||||
if err != nil {
|
||||
logrus.Warn("Can't get microdescriptors from Tor. Delaying...")
|
||||
return nodes
|
||||
}
|
||||
// Turn the mds into a dictionary indexed by the digest as an
|
||||
// optimization while matching them with routerstatuses.
|
||||
microdescriptorsDict := make(map[string]MicroDescriptor)
|
||||
for _, md := range microdescriptorsList {
|
||||
microdescriptorsDict[md.Digest()] = md
|
||||
}
|
||||
|
||||
// Go through the routerstatuses and match them up with
|
||||
// microdescriptors, and create a Node object for each match. If there
|
||||
// is no match we don't register it as a node.
|
||||
for _, relayRouterStatusFn := range c.getRouterStatuses() {
|
||||
relayRouterStatus := relayRouterStatusFn()
|
||||
logrus.Debugf("Checking routerstatus with md digest %s", relayRouterStatus.Digest)
|
||||
nodeMicrodescriptor, found := microdescriptorsDict[relayRouterStatus.Digest]
|
||||
if !found {
|
||||
logrus.Debugf("Could not find microdesc for rs with fpr %s", relayRouterStatus.Fingerprint)
|
||||
continue
|
||||
}
|
||||
node := NewNode(nodeMicrodescriptor, relayRouterStatus)
|
||||
nodes = append(nodes, node)
|
||||
}
|
||||
return nodes
|
||||
}
|
||||
|
||||
func (c *Consensus) getRouterStatuses() map[Fingerprint]GetStatus {
|
||||
if !c.IsLive() {
|
||||
panic("getRouterStatuses and not live")
|
||||
}
|
||||
return c.consensus.Routers
|
||||
}
|
||||
|
||||
// NetworkStatusDocumentV3 parse a v3 network status document
|
||||
func NetworkStatusDocumentV3(mdConsensusStr string) (*ConsensusDoc, error) {
|
||||
//fmt.Println(mdConsensusStr)
|
||||
cd := &ConsensusDoc{}
|
||||
|
||||
var consensus = NewConsensus1()
|
||||
|
||||
var statusParser func(string) (Fingerprint, GetStatus, error)
|
||||
statusParser = ParseRawStatus
|
||||
|
||||
lines1 := strings.Split(mdConsensusStr, "\n")
|
||||
if len(lines1) < 2 {
|
||||
// TODO: the following line SOMETIMES returns "panic: runtime error: slice bounds out of range [2:1]" when new consensus is in, not sure why.
|
||||
logrus.Panic(mdConsensusStr)
|
||||
}
|
||||
br := bufio.NewReader(strings.NewReader(strings.Join(lines1[2:], "\n")))
|
||||
err := extractMetaInfo(br, consensus)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("metadata info extraction failed: %w", err)
|
||||
}
|
||||
queue := make(chan QueueUnit)
|
||||
go DissectFile(br, extractStatusEntry, queue)
|
||||
|
||||
// Parse incoming router statuses until the channel is closed by the remote
|
||||
// end.
|
||||
for unit := range queue {
|
||||
if unit.Err != nil {
|
||||
return nil, unit.Err
|
||||
}
|
||||
|
||||
fingerprint, getStatus, err := statusParser(unit.Blurb)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
consensus.Routers[SanitiseFingerprint(fingerprint)] = getStatus
|
||||
}
|
||||
|
||||
lines := strings.Split(mdConsensusStr, "\n")
|
||||
for _, line := range lines {
|
||||
if strings.HasPrefix(line, "valid-after ") {
|
||||
validAfter := strings.TrimPrefix(line, "valid-after ")
|
||||
cd.ValidAfter, _ = time.Parse("2006-01-02 15:04:05", validAfter)
|
||||
} else if strings.HasPrefix(line, "valid-until ") {
|
||||
validUntil := strings.TrimPrefix(line, "valid-until ")
|
||||
cd.ValidUntil, _ = time.Parse("2006-01-02 15:04:05", validUntil)
|
||||
}
|
||||
}
|
||||
|
||||
return consensus, nil
|
||||
}
|
||||
|
||||
// NewConsensus serves as a constructor and returns a pointer to a freshly
|
||||
// allocated and empty Consensus.
|
||||
func NewConsensus1() *ConsensusDoc {
|
||||
return &ConsensusDoc{Routers: make(map[Fingerprint]GetStatus)}
|
||||
}
|
||||
|
||||
// ParseRawStatus parses a raw router status (in string format) and returns the
|
||||
// router's fingerprint, a function which returns a RouterStatus, and an error
|
||||
// if there were any during parsing.
|
||||
func ParseRawStatus(rawStatus string) (Fingerprint, GetStatus, error) {
|
||||
|
||||
var status = new(RouterStatus)
|
||||
|
||||
lines := strings.Split(rawStatus, "\n")
|
||||
|
||||
// Go over raw statuses line by line and extract the fields we are
|
||||
// interested in.
|
||||
for _, line := range lines {
|
||||
|
||||
words := strings.Split(line, " ")
|
||||
|
||||
switch words[0] {
|
||||
|
||||
case "r":
|
||||
status.Nickname = words[1]
|
||||
fingerprint, err := Base64ToString(words[2])
|
||||
if err != nil {
|
||||
return "", nil, err
|
||||
}
|
||||
status.Fingerprint = SanitiseFingerprint(Fingerprint(fingerprint))
|
||||
|
||||
publish, _ := time.Parse(publishedTimeLayout, strings.Join(words[3:5], " "))
|
||||
status.Publication = publish
|
||||
status.Address.IPv4Address = net.ParseIP(words[5])
|
||||
status.Address.IPv4ORPort = StringToPort(words[6])
|
||||
status.Address.IPv4DirPort = StringToPort(words[7])
|
||||
|
||||
case "a":
|
||||
status.Address.IPv6Address, status.Address.IPv6ORPort = parseIPv6AddressAndPort(words[1])
|
||||
|
||||
case "m":
|
||||
status.Digest = words[1]
|
||||
|
||||
case "s":
|
||||
status.Flags = *parseRouterFlags(words[1:])
|
||||
|
||||
case "v":
|
||||
status.TorVersion = words[2]
|
||||
|
||||
case "w":
|
||||
bwExpr := words[1]
|
||||
values := strings.Split(bwExpr, "=")
|
||||
status.Bandwidth, _ = strconv.ParseUint(values[1], 10, 64)
|
||||
|
||||
case "p":
|
||||
status.Accept = words[1] == "accept"
|
||||
status.PortList = strings.Join(words[2:], " ")
|
||||
}
|
||||
}
|
||||
|
||||
return status.Fingerprint, func() *RouterStatus { return status }, nil
|
||||
}
|
||||
|
||||
const (
|
||||
// The layout of the "published" field.
|
||||
publishedTimeLayout = "2006-01-02 15:04:05"
|
||||
)
|
||||
|
||||
// SanitiseFingerprint returns a sanitised version of the given fingerprint by
|
||||
// making it upper case and removing leading and trailing white spaces.
|
||||
func SanitiseFingerprint(fingerprint Fingerprint) Fingerprint {
|
||||
|
||||
sanitised := strings.ToUpper(strings.TrimSpace(string(fingerprint)))
|
||||
|
||||
return Fingerprint(sanitised)
|
||||
}
|
||||
|
||||
func parseIPv6AddressAndPort(addressAndPort string) (address net.IP, port uint16) {
|
||||
var ipV6regex = regexp.MustCompile(`\[(.*?)\]`)
|
||||
var ipV6portRegex = regexp.MustCompile(`\]:(.*)`)
|
||||
address = net.ParseIP(ipV6regex.FindStringSubmatch(addressAndPort)[1])
|
||||
port = StringToPort(ipV6portRegex.FindStringSubmatch(addressAndPort)[1])
|
||||
|
||||
return address, port
|
||||
}
|
||||
|
||||
// Convert the given port string to an unsigned 16-bit integer. If the
|
||||
// conversion fails or the number cannot be represented in 16 bits, 0 is
|
||||
// returned.
|
||||
func StringToPort(portStr string) uint16 {
|
||||
|
||||
portNum, err := strconv.ParseUint(portStr, 10, 16)
|
||||
if err != nil {
|
||||
return uint16(0)
|
||||
}
|
||||
|
||||
return uint16(portNum)
|
||||
}
|
||||
|
||||
func parseRouterFlags(flags []string) *RouterFlags {
|
||||
|
||||
var routerFlags = new(RouterFlags)
|
||||
|
||||
for _, flag := range flags {
|
||||
switch flag {
|
||||
case "Authority":
|
||||
routerFlags.Authority = true
|
||||
case "BadExit":
|
||||
routerFlags.BadExit = true
|
||||
case "Exit":
|
||||
routerFlags.Exit = true
|
||||
case "Fast":
|
||||
routerFlags.Fast = true
|
||||
case "Guard":
|
||||
routerFlags.Guard = true
|
||||
case "HSDir":
|
||||
routerFlags.HSDir = true
|
||||
case "Named":
|
||||
routerFlags.Named = true
|
||||
case "Stable":
|
||||
routerFlags.Stable = true
|
||||
case "Running":
|
||||
routerFlags.Running = true
|
||||
case "Unnamed":
|
||||
routerFlags.Unnamed = true
|
||||
case "Valid":
|
||||
routerFlags.Valid = true
|
||||
case "V2Dir":
|
||||
routerFlags.V2Dir = true
|
||||
}
|
||||
}
|
||||
|
||||
return routerFlags
|
||||
}
|
||||
|
||||
// Base64ToString decodes the given Base64-encoded string and returns the
// result as a hex-encoded string. If decoding fails, an error is returned.
func Base64ToString(encoded string) (string, error) {
|
||||
|
||||
// dir-spec.txt says that Base64 padding is removed so we have to account
|
||||
// for that here.
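// For example, a 27-character encoding has 27 % 4 == 3, so one "=" is
// appended before decoding.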
|
||||
if rem := len(encoded) % 4; rem != 0 {
|
||||
encoded += strings.Repeat("=", 4-rem)
|
||||
}
|
||||
|
||||
decoded, err := base64.StdEncoding.DecodeString(encoded)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return hex.EncodeToString(decoded), nil
|
||||
}
|
||||
|
||||
type QueueUnit struct {
|
||||
Blurb string
|
||||
Err error
|
||||
}
|
||||
|
||||
// Fingerprint represents a relay's fingerprint as 40 hex digits.
|
||||
type Fingerprint string
|
||||
|
||||
type GetStatus func() *RouterStatus
|
||||
|
||||
type RouterStatus struct {
|
||||
|
||||
// The single fields of an "r" line.
|
||||
Nickname string
|
||||
Fingerprint Fingerprint
|
||||
Digest string
|
||||
Publication time.Time
|
||||
|
||||
// The IPv4 and IPv6 fields of an "a" line.
|
||||
Address RouterAddress
|
||||
|
||||
// The single fields of an "s" line.
|
||||
Flags RouterFlags
|
||||
|
||||
// The single fields of a "v" line.
|
||||
TorVersion string
|
||||
|
||||
// The single fields of a "w" line.
|
||||
Bandwidth uint64
|
||||
Measured uint64
|
||||
Unmeasured bool
|
||||
|
||||
// The single fields of a "p" line.
|
||||
Accept bool
|
||||
PortList string
|
||||
}
|
||||
|
||||
type RouterFlags struct {
|
||||
Authority bool
|
||||
BadExit bool
|
||||
Exit bool
|
||||
Fast bool
|
||||
Guard bool
|
||||
HSDir bool
|
||||
Named bool
|
||||
Stable bool
|
||||
Running bool
|
||||
Unnamed bool
|
||||
Valid bool
|
||||
V2Dir bool
|
||||
}
|
||||
|
||||
type RouterAddress struct {
|
||||
IPv4Address net.IP
|
||||
IPv4ORPort uint16
|
||||
IPv4DirPort uint16
|
||||
|
||||
IPv6Address net.IP
|
||||
IPv6ORPort uint16
|
||||
}
|
||||
|
||||
type ConsensusDoc struct {
|
||||
// Generic map of consensus metadata
|
||||
MetaInfo map[string][]byte
|
||||
|
||||
// Document validity period
|
||||
ValidAfter time.Time
|
||||
FreshUntil time.Time
|
||||
ValidUntil time.Time
|
||||
|
||||
// Shared randomness
|
||||
sharedRandomnessPreviousValue []byte
|
||||
sharedRandomnessCurrentValue []byte
|
||||
|
||||
// A map from relay fingerprint to a function which returns the relay
|
||||
// status.
|
||||
Routers map[Fingerprint]GetStatus
|
||||
|
||||
// The spread score for HSDIR selection
|
||||
SpreadScore int
|
||||
}
|
||||
|
||||
// extractMetaInfo extracts meta information of the open consensus document
|
||||
// (such as its validity times) and writes it to the provided consensus struct.
|
||||
// It assumes that the type annotation has already been read.
|
||||
func extractMetaInfo(br *bufio.Reader, c *ConsensusDoc) error {
|
||||
|
||||
c.MetaInfo = make(map[string][]byte)
|
||||
|
||||
// Read the initial metadata. We'll later extract information of particular
// interest by name. An explicit Reader loop is used because a Scanner
// would buffer ahead and consume bytes that are still needed below.
|
||||
for line, err := br.ReadSlice('\n'); ; line, err = br.ReadSlice('\n') {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// splits to (key, value)
|
||||
split := bytes.SplitN(line, []byte(" "), 2)
|
||||
if len(split) != 2 {
|
||||
return errors.New("malformed metainfo line")
|
||||
}
|
||||
|
||||
key := string(split[0])
|
||||
|
||||
logrus.Debug("[Consensus] ", key)
|
||||
|
||||
if key == "params" {
|
||||
splitParams := bytes.SplitAfter(line, []byte(" "))
|
||||
for _, v := range splitParams {
|
||||
if bytes.HasPrefix(v, []byte("hsdir_spread_store")) {
|
||||
splitInnerParams := bytes.SplitN(v, []byte("="), 2)
|
||||
if len(splitInnerParams) != 2 {
|
||||
return errors.New("malformed hsdir_spread_store param line! POTENTIAL CONSENSUS COMPROMISE")
|
||||
}
|
||||
c.SpreadScore, err = strconv.Atoi(strings.TrimSpace(string(splitInnerParams[1])))
|
||||
if err != nil {
|
||||
logrus.Panic("SpreadScore couldn't be parsed as int!", err)
|
||||
}
|
||||
p := Params()
|
||||
if c.SpreadScore != p.HsdirSpreadStore() {
|
||||
logrus.Debugf("[Consensus] Spread score set to %d", c.SpreadScore)
|
||||
p.SetHsdirSpreadStore(c.SpreadScore)
|
||||
}
|
||||
}
|
||||
logrus.Debugf("[Consensus][Params] %s", string(v))
|
||||
}
|
||||
} else {
|
||||
c.MetaInfo[key] = bytes.TrimSpace(split[1])
|
||||
}
|
||||
|
||||
// Look ahead to check if we've reached the end of the unique keys.
|
||||
nextKey, err := br.Peek(11)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if bytes.HasPrefix(nextKey, []byte("dir-source")) || bytes.HasPrefix(nextKey, []byte("fingerprint")) {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
var err error
|
||||
// Define a parser for validity timestamps
|
||||
parseTime := func(line []byte) (time.Time, error) {
|
||||
return time.Parse("2006-01-02 15:04:05", string(line))
|
||||
}
|
||||
|
||||
// Extract the validity period of this consensus
|
||||
c.ValidAfter, err = parseTime(c.MetaInfo["valid-after"])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
c.FreshUntil, err = parseTime(c.MetaInfo["fresh-until"])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
c.ValidUntil, err = parseTime(c.MetaInfo["valid-until"])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Reads a shared-rand line from the consensus and returns decoded bytes.
|
||||
parseRand := func(line []byte) ([]byte, error) {
|
||||
split := bytes.SplitN(line, []byte(" "), 2)
|
||||
if len(split) != 2 {
|
||||
return nil, errors.New("malformed shared random line")
|
||||
}
|
||||
// should split to (vote count, b64 bytes)
|
||||
_, rand := split[0], split[1]
|
||||
return base64.StdEncoding.DecodeString(string(rand))
|
||||
}
|
||||
|
||||
// Only the newer consensus documents have these values.
|
||||
if line, ok := c.MetaInfo["shared-rand-previous-value"]; ok {
|
||||
val, err := parseRand(line)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
c.sharedRandomnessPreviousValue = val
|
||||
}
|
||||
if line, ok := c.MetaInfo["shared-rand-current-value"]; ok {
|
||||
val, err := parseRand(line)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
c.sharedRandomnessCurrentValue = val
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Dissects the given file into string chunks by using the given string
|
||||
// extraction function. The resulting string chunks are then written to the
|
||||
// given queue where the receiving end parses them.
|
||||
func DissectFile(r io.Reader, extractor bufio.SplitFunc, queue chan QueueUnit) {
|
||||
|
||||
defer close(queue)
|
||||
|
||||
scanner := bufio.NewScanner(r)
|
||||
scanner.Split(extractor)
|
||||
|
||||
for scanner.Scan() {
|
||||
unit := scanner.Text()
|
||||
queue <- QueueUnit{unit, nil}
|
||||
}
|
||||
|
||||
if err := scanner.Err(); err != nil {
|
||||
queue <- QueueUnit{"", err}
|
||||
}
|
||||
}
|
||||
|
||||
// extractStatusEntry is a bufio.SplitFunc that extracts individual network
|
||||
// status entries.
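// Each token starts at an "r " line and extends up to (but not including)
// the next "r " line, or up to "directory-signature" for the final entry.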
|
||||
func extractStatusEntry(data []byte, atEOF bool) (advance int, token []byte, err error) {
|
||||
|
||||
if atEOF && len(data) == 0 {
|
||||
return 0, nil, nil
|
||||
}
|
||||
|
||||
start := 0
|
||||
if !bytes.HasPrefix(data, []byte("r ")) {
|
||||
start = bytes.Index(data, []byte("\nr "))
|
||||
if start < 0 {
|
||||
if atEOF {
|
||||
return 0, nil, fmt.Errorf("cannot find beginning of status entry: \"\\nr \"")
|
||||
}
|
||||
// Request more data.
|
||||
return 0, nil, nil
|
||||
}
|
||||
start++
|
||||
}
|
||||
|
||||
end := bytes.Index(data[start:], []byte("\nr "))
|
||||
if end >= 0 {
|
||||
return start + end + 1, data[start : start+end+1], nil
|
||||
}
|
||||
end = bytes.Index(data[start:], []byte("directory-signature"))
|
||||
if end >= 0 {
|
||||
// "directory-signature" means this is the last status; stop
|
||||
// scanning.
|
||||
return start + end, data[start : start+end], bufio.ErrFinalToken
|
||||
}
|
||||
if atEOF {
|
||||
return len(data), data[start:], errors.New("no status entry")
|
||||
}
|
||||
// Request more data.
|
||||
return 0, nil, nil
|
||||
}
|
@ -0,0 +1,23 @@
|
||||
package onionbalance
|
||||
|
||||
import "testing"
|
||||
|
||||
func TestParseIPv6AddressAndPort(t *testing.T) {
|
||||
_, getStatus, err := ParseRawStatus(`r Karlstad0 m5TNC3uAV+ryG6fwI7ehyMqc5kU f1g9KQhgS0r6+H/7dzAJOpi6lG8 2014-12-08 06:57:54 193.11.166.194 9000 80
|
||||
a [2002:470:6e:80d::2]:22
|
||||
s Fast Guard HSDir Running Stable V2Dir Valid
|
||||
v Tor 0.2.4.23
|
||||
w Bandwidth=2670
|
||||
p reject 1-65535`)
|
||||
if err != nil {
|
||||
t.Error(err)
|
||||
}
|
||||
|
||||
if getStatus().Address.IPv6Address.String() != "2002:470:6e:80d::2" {
|
||||
t.Error("Failes to Parse IPv6 Address correctly.")
|
||||
}
|
||||
|
||||
if getStatus().Address.IPv6ORPort != StringToPort("22") {
|
||||
t.Error("Failes to Parse IPv6 Port correctly.")
|
||||
}
|
||||
}
|
659
endgamefiles/sourcecode/gobalance/pkg/onionbalance/controller.go
Normal file
@ -0,0 +1,659 @@
|
||||
package onionbalance
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"crypto/ed25519"
|
||||
"crypto/hmac"
|
||||
"crypto/sha256"
|
||||
"encoding/base64"
|
||||
"encoding/binary"
|
||||
"encoding/hex"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/sirupsen/logrus"
|
||||
"gobalance/pkg/brand"
|
||||
"golang.org/x/crypto/sha3"
|
||||
"io"
|
||||
"net"
|
||||
"os"
|
||||
"regexp"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
//type Router struct {
|
||||
// RelayFpr string
|
||||
// MicrodescriptorDigest string
|
||||
// Fingerprint string
|
||||
// Protocols map[string][]int64
|
||||
// Flags []string
|
||||
//}
|
||||
//
|
||||
//type ConsensusDoc struct {
|
||||
// ValidAfter time.Time
|
||||
// ValidUntil time.Time
|
||||
// sharedRandomnessPreviousValue *string
|
||||
// sharedRandomnessCurrentValue *string
|
||||
// Routers []Router
|
||||
//}
|
||||
|
||||
var ErrSocketClosed = errors.New("socket closed")
|
||||
|
||||
// Return the start time of the upcoming time period
|
||||
func (c ConsensusDoc) GetStartTimeOfNextTimePeriod(validAfter int64) int64 {
|
||||
// Get start time of next time period
|
||||
timePeriodLength := c.GetTimePeriodLength()
|
||||
nextTimePeriodNum := c.getNextTimePeriodNum(validAfter)
|
||||
startOfNextTpInMins := nextTimePeriodNum * timePeriodLength
|
||||
// Apply rotation offset as specified by prop224 section [TIME-PERIODS]
|
||||
timePeriodRotationOffset := getSrvPhaseDuration()
|
||||
return (startOfNextTpInMins + timePeriodRotationOffset) * 60
|
||||
}
|
||||
|
||||
func (c ConsensusDoc) GetPreviousSrv(timePeriodNum int64) []byte {
|
||||
if c.sharedRandomnessPreviousValue != nil {
|
||||
return c.sharedRandomnessPreviousValue
|
||||
} else if timePeriodNum != 0 {
|
||||
logrus.Info("SRV not found so falling back to disaster mode")
|
||||
return c.getDisasterSrv(timePeriodNum)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c ConsensusDoc) GetCurrentSrv(timePeriodNum int64) []byte {
|
||||
if c.sharedRandomnessCurrentValue != nil {
|
||||
return c.sharedRandomnessCurrentValue
|
||||
} else if timePeriodNum != 0 {
|
||||
logrus.Info("SRV not found so falling back to disaster mode")
|
||||
return c.getDisasterSrv(timePeriodNum)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c ConsensusDoc) GetStartTimeOfCurrentSrvRun() int64 {
|
||||
beginningOfCurrentRound := c.ValidAfter.Unix()
|
||||
votingIntervalSecs := int64(60 * 60)
|
||||
currRoundSlot := (beginningOfCurrentRound / votingIntervalSecs) % 24
|
||||
timeElapsedSinceStartOfRun := currRoundSlot * votingIntervalSecs
|
||||
logrus.Debugf("Current SRV proto run: Start of current round: %d. Time elapsed: %d (%d)\n", beginningOfCurrentRound,
|
||||
timeElapsedSinceStartOfRun, votingIntervalSecs)
|
||||
return beginningOfCurrentRound - timeElapsedSinceStartOfRun
|
||||
}
|
||||
|
||||
func (c ConsensusDoc) GetStartTimeOfPreviousSrvRun() int64 {
|
||||
startTimeOfCurrentRun := c.GetStartTimeOfCurrentSrvRun()
|
||||
return startTimeOfCurrentRun - 24*3600
|
||||
}
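// GetBlindingParam derives the key-blinding factor for the given time period.
// This appears to follow the construction in rend-spec-v3.txt [KEYBLIND]:
//   h = SHA3-256(BLIND_STRING | pubkey | basepoint | "key-blind" | INT_8(period_num) | INT_8(period_length))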
|
||||
|
||||
func (c ConsensusDoc) GetBlindingParam(identityPubkey ed25519.PublicKey, timePeriodNumber int64) []byte {
|
||||
Ed25519Basepoint := "(15112221349535400772501151409588531511" +
|
||||
"454012693041857206046113283949847762202, " +
|
||||
"463168356949264781694283940034751631413" +
|
||||
"07993866256225615783033603165251855960)"
|
||||
BlindString := "Derive temporary signing key\x00"
|
||||
periodLength := c.GetTimePeriodLength()
|
||||
data1 := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(data1[len(data1)-8:], uint64(timePeriodNumber))
|
||||
data2 := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(data2[len(data2)-8:], uint64(periodLength))
|
||||
N := "key-blind" + string(data1) + string(data2)
|
||||
toEnc := []byte(BlindString + string(identityPubkey) + Ed25519Basepoint + N)
|
||||
tmp := sha3.Sum256(toEnc)
|
||||
return tmp[:]
|
||||
}
|
||||
|
||||
// Return disaster SRV for 'timePeriodNum'.
|
||||
func (c ConsensusDoc) getDisasterSrv(timePeriodNum int64) []byte {
|
||||
timePeriodLength := c.GetTimePeriodLength()
|
||||
data := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(data[len(data)-8:], uint64(timePeriodLength))
|
||||
data1 := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(data1[len(data1)-8:], uint64(timePeriodNum))
|
||||
disasterBody := "shared-random-disaster" + string(data) + string(data1)
|
||||
s := sha3.Sum256([]byte(disasterBody))
|
||||
return s[:]
|
||||
}
|
||||
|
||||
func (c ConsensusDoc) getNextTimePeriodNum(validAfter int64) int64 {
|
||||
return c.GetTimePeriodNum(validAfter) + 1
|
||||
}
|
||||
|
||||
// GetTimePeriodLength get the HSv3 time period length in minutes
|
||||
func (c ConsensusDoc) GetTimePeriodLength() int64 {
|
||||
return 24 * 60
|
||||
}
|
||||
|
||||
func getSrvPhaseDuration() int64 {
|
||||
return 12 * 60
|
||||
}
|
||||
|
||||
// GetTimePeriodNum gets the time period number for this validAfter,
// given in seconds since the epoch.
// The time period length has its default value of 1440 minutes == 1 day.
|
||||
func (c ConsensusDoc) GetTimePeriodNum(validAfter int64) int64 {
|
||||
timePeriodLength := c.GetTimePeriodLength()
|
||||
secondsSinceEpoch := validAfter
|
||||
minutesSinceEpoch := secondsSinceEpoch / 60
|
||||
// Calculate offset as specified in rend-spec-v3.txt [TIME-PERIODS]
|
||||
timePeriodRotationOffset := getSrvPhaseDuration()
|
||||
// assert(minutes_since_epoch > time_period_rotation_offset)
|
||||
minutesSinceEpoch -= timePeriodRotationOffset
|
||||
timePeriodNum := minutesSinceEpoch / timePeriodLength
|
||||
return timePeriodNum
|
||||
}
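// Worked example: for a valid-after of 2023-01-01 12:00:00 UTC (1672574400),
// minutesSinceEpoch is 27876240; subtracting the 720-minute rotation offset
// and dividing by the 1440-minute period length yields time period 19358.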
|
||||
|
||||
type Controller struct {
|
||||
host string
|
||||
port int
|
||||
password string
|
||||
conn net.Conn
|
||||
connMtx sync.Mutex
|
||||
events chan string
|
||||
msgs chan string
|
||||
}
|
||||
|
||||
func NewController(host string, port int, torPassword string) *Controller {
|
||||
c := new(Controller)
|
||||
c.host = host
|
||||
c.port = port
|
||||
c.password = torPassword
|
||||
c.MustDial()
|
||||
c.launchThreads()
|
||||
if err := c.protocolAuth(); err != nil {
|
||||
panic(err)
|
||||
}
|
||||
//_ = c.SetEvents()
|
||||
return c
|
||||
}
|
||||
|
||||
var reauthMtx sync.Mutex
|
||||
|
||||
func (c *Controller) ReAuthenticate() {
|
||||
if !reauthMtx.TryLock() {
|
||||
logrus.Error("re-authenticate already in progress")
|
||||
time.Sleep(10 * time.Second)
|
||||
return
|
||||
}
|
||||
defer reauthMtx.Unlock()
|
||||
for {
|
||||
time.Sleep(10 * time.Second)
|
||||
var err error
|
||||
|
||||
if err = c.Dial(); err != nil {
|
||||
logrus.Error("Failed to re-authenticate controller.")
|
||||
continue
|
||||
}
|
||||
|
||||
go c.connScannerThread()
|
||||
if err := c.protocolAuth(); err != nil {
|
||||
logrus.Error("Failed to re-authenticate controller.")
|
||||
c.closeConn()
|
||||
continue
|
||||
}
|
||||
if err := c.SetEvents(); err != nil {
|
||||
logrus.Error("Failed to re-authenticate controller.")
|
||||
c.closeConn()
|
||||
continue
|
||||
}
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Controller) protocolAuth() error {
|
||||
protocolInfo, err := c.ProtocolInfo()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if protocolInfo.IsHashedPassword {
|
||||
if err := c.Auth(c.password); err != nil {
|
||||
return err
|
||||
}
|
||||
} else if protocolInfo.CookieContent != nil {
|
||||
if err := c.AuthWithCookie(protocolInfo.CookieContent); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if err := c.Auth(c.password); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
logrus.Debug("Successfully authenticated on the Tor control connection.")
|
||||
return nil
|
||||
}
|
||||
|
||||
// Return True if 'onion_address' is one of our instances.
|
||||
func (b *Onionbalance) addressIsInstance(onionAddress string) bool {
|
||||
for _, service := range b.GetServices() {
|
||||
for _, instance := range service.GetInstances() {
|
||||
if instance.hasOnionAddress(onionAddress) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (b *Onionbalance) AddressIsFrontend(onionAddress string) bool {
|
||||
for _, service := range b.GetServices() {
|
||||
if service.hasOnionAddress(onionAddress) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// A wrapper for the HS_DESC control port event:
|
||||
// https://github.com/torproject/torspec/blob/4da63977b86f4c17d0e8cf87ed492c72a4c9b2d9/control-spec.txt#L3594
|
||||
func (b *Onionbalance) handleNewDescEventWrapper(statusEvent string) {
|
||||
// HS_DESC Action HSAddress AuthType HsDir
|
||||
// HS_DESC RECEIVED o5fke5yq63krmfy5nxqatnykru664qgohrvhzalielqavpo4sut6kvad NO_AUTH $3D1BBDB539FAACA19EC27334DC6D08FD68D82775~alan 35D0MMu7YxXqhlV/u4uQ26qdT/jZXH1Ua2eYDXnavFs
|
||||
// HS_DESC UPLOADED o5fke5yq63krmfy5nxqatnykru664qgohrvhzalielqavpo4sut6kvad UNKNOWN $6A51575EFF4DC40CE8D97169E0F0AC9DE97E8B69~a9RelayMIA
|
||||
// HS_DESC REQUESTED dkforestseeaaq2dqz2uflmlsybvnq2irzn4ygyvu53oazyorednviid NO_AUTH $B7327B559CA1531D182386E21B4868FCB7F0F456~Maine obnMXJfQ9YhQ2ekm6uLiAu4TICHx1EeM5+DYVvvo480 HSDIR_INDEX=04F61F2A8367AED55A6E7FC1906AAFA8FC2610D9A8E96A02E9792FC53857D10D
|
||||
// HS_DESC FAILED xa5mofmlp2iwsapc6cskc4uflvcon2f4j2fbklycjk55e4bkqmxblyyd NO_AUTH $12CB4C0E78A71C846069605361B1E1FF528E1AF0~bammbamm OnxmaOKfU5mbR02QgVXrLh16/33MsrZmt7URcL0sffI REASON=UPLOAD_REJECTED
|
||||
p := Params()
|
||||
words := strings.Split(statusEvent, " ")
|
||||
action := words[1]
|
||||
hsAddress := words[2]
|
||||
// authType := words[3]
|
||||
hsDir := words[4]
|
||||
if action == "RECEIVED" {
|
||||
return // We already log in HS_DESC_CONTENT so no need to do it here too
|
||||
} else if action == "UPLOADED" {
|
||||
logrus.Infof("Successfully uploaded descriptor for %s to %s", hsAddress, hsDir)
|
||||
} else if action == "FAILED" {
|
||||
adaptHSDirFailureCount := p.AdaptHSDirFailureCount()
|
||||
p.SetAdaptHSDirFailureCount(adaptHSDirFailureCount + 1)
|
||||
reason := "REASON NULL"
|
||||
if len(words) >= 6 {
|
||||
reason = words[6]
|
||||
}
|
||||
|
||||
if b.addressIsInstance(hsAddress) {
|
||||
adaptFetchFail := p.AdaptFetchFail()
|
||||
p.SetAdaptFetchFail(adaptFetchFail + 1)
|
||||
logrus.Infof("Descriptor fetch failed for instance %s from %s (%s)", hsAddress, hsDir, reason)
|
||||
} else if b.AddressIsFrontend(hsAddress) {
|
||||
adaptDescriptorFail := p.AdaptDescriptorFail()
|
||||
p.SetAdaptDescriptorFail(adaptDescriptorFail + 1)
|
||||
logrus.Warningf("Descriptor upload failed for frontend %s to %s (%s)", hsAddress, hsDir, reason)
|
||||
} else {
|
||||
logrus.Warningf("Descriptor action failed for unknown service %s to %s (%s)", hsAddress, hsDir, reason)
|
||||
}
|
||||
} else if action == "REQUESTED" {
|
||||
logrus.Debugf("Requested descriptor for %s from %s...", hsAddress, hsDir)
|
||||
}
|
||||
}
|
||||
|
||||
// https://github.com/torproject/torspec/blob/4da63977b86f4c17d0e8cf87ed492c72a4c9b2d9/control-spec.txt#L3664
|
||||
func (b *Onionbalance) handleNewDescContentEventWrapper(statusEvent string) {
|
||||
/*
|
||||
o5fke5yq63krmfy5nxqatnykru664qgohrvhzalielqavpo4sut6kvad 35D0MMu7YxXqhlV/u4uQ26qdT/jZXH1Ua2eYDXnavFs $14A1D6B6F417DEC38BB05A3FFAD566F6E003E0D9~quartzyrelay
|
||||
hs-descriptor 3
|
||||
descriptor-lifetime 180
|
||||
descriptor-signing-key-cert
|
||||
-----BEGIN ED25519 CERT-----
|
||||
AQgABvm2AU9N5AzUVIwCITJ2J4Cj/EbgUPKA74jCUsSG3a6Dg+BuAQAgBADfkPQw
|
||||
y7tjFeqGVX+7i5Dbqp1P+NlcfVRrZ5gNedq8W/V3lx6ZWy4kSjsHUPz5mJjEnay/
|
||||
yxBpz2MPh7Key9TtMX3kkOV+YSdVVEj3RYZDFO3L2d41pfsOyofmSVscEg0=
|
||||
-----END ED25519 CERT-----
|
||||
revision-counter 3767530536
|
||||
superencrypted
|
||||
-----BEGIN MESSAGE-----
|
||||
4irIE1RXoopvgBEHohhUfv4s1p0wKRK0CJ86fB9CoxkAO6MkJl/QQMvM4XvLbTe+
|
||||
IsvKSujhPsrMxeJywS02wUrKNyEPYsb229l7mYLsHCTcp/Yr4EjFVlgt9QC7x7p0
|
||||
4h3EsUT1izNY8p72LV5k7A==
|
||||
-----END MESSAGE-----
|
||||
signature ivnFALhtO63SlCUj6sZDzllUGGZzuh9MnqOGyr3tU6O2MXVsQpQL7QJLavU1/4c5ITUsX90Bov20mCHSwKNODw
|
||||
*/
|
||||
p := Params()
|
||||
if p.AdaptWgEnabled() {
|
||||
p.AdaptWg().Done()
|
||||
adaptWgCount := p.AdaptWgCount()
|
||||
p.SetAdaptWgCount(adaptWgCount - 1)
|
||||
logrus.Debugf("Adapt waitgroup count: %d", p.AdaptWgCount())
|
||||
}
|
||||
lines := strings.SplitN(statusEvent, "\n", 2)
|
||||
descriptorText := lines[1]
|
||||
words := strings.Split(lines[0], " ")
|
||||
hsAddress := words[1]
|
||||
//DescId := words[2]
|
||||
//HsDir := words[3]
|
||||
//Descriptor := words[4:]
|
||||
for _, inst := range b.getAllInstances() {
|
||||
if inst.OnionAddress == hsAddress {
|
||||
inst.registerDescriptor(descriptorText, hsAddress)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Parse Tor status events such as "STATUS_CLIENT", e.g.:
|
||||
// STATUS_CLIENT NOTICE CONSENSUS_ARRIVED
|
||||
func (b *Onionbalance) handleNewStatusEventWrapper(statusEvent string) {
|
||||
p := Params()
|
||||
words := strings.Split(statusEvent, " ")
|
||||
action := words[2]
|
||||
if action == "CONSENSUS_ARRIVED" {
|
||||
logrus.Debug("Received new consensus!")
|
||||
b.consensus.refresh()
|
||||
// Call all callbacks to pull from the latest consensus!
|
||||
p.FetchChannel <- true
|
||||
time.Sleep(10 * time.Second)
|
||||
p.PublishChannel <- true
|
||||
}
|
||||
}
|
||||
|
||||
// https://github.com/torproject/torspec/blob/4da63977b86f4c17d0e8cf87ed492c72a4c9b2d9/dir-spec.txt#L1642
|
||||
func (c *Controller) launchThreads() {
|
||||
c.events = make(chan string, 1000)
|
||||
c.msgs = make(chan string, 1000)
|
||||
go c.eventsHandlerThread()
|
||||
go c.connScannerThread()
|
||||
}
|
||||
|
||||
func (c *Controller) closeConn() {
|
||||
c.connMtx.Lock()
|
||||
defer c.connMtx.Unlock()
|
||||
if err := c.conn.Close(); err != nil {
|
||||
logrus.Error(err)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Controller) connWrite(msg string) error {
|
||||
c.connMtx.Lock()
|
||||
defer c.connMtx.Unlock()
|
||||
if _, err := c.conn.Write([]byte(msg)); err != nil {
|
||||
return ErrSocketClosed
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Controller) Msg(msg string) (string, error) {
|
||||
if err := c.connWrite(msg); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
var res string
|
||||
select {
|
||||
case res = <-c.msgs:
|
||||
case <-time.After(5 * time.Second):
|
||||
logrus.Error("timed out trying to receive message from Tor control")
|
||||
return "", ErrSocketClosed
|
||||
}
|
||||
return res, nil
|
||||
}
|
||||
|
||||
func (c *Controller) eventsHandlerThread() {
|
||||
for msg := range c.events {
|
||||
if strings.HasPrefix(msg, "650 ") {
|
||||
msg = strings.TrimPrefix(msg, "650 ")
|
||||
} else if strings.HasPrefix(msg, "650+") {
|
||||
msg = strings.TrimPrefix(msg, "650+")
|
||||
}
|
||||
words := strings.Split(msg, " ")
|
||||
if words[0] == "HS_DESC" {
|
||||
OnionBalance().handleNewDescEventWrapper(msg)
|
||||
} else if words[0] == "HS_DESC_CONTENT" {
|
||||
OnionBalance().handleNewDescContentEventWrapper(msg)
|
||||
} else if words[0] == "STATUS_CLIENT" {
|
||||
OnionBalance().handleNewStatusEventWrapper(msg)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Controller) connScannerThread() {
|
||||
clb := func(msg string) {
|
||||
if strings.HasPrefix(msg, "650") {
|
||||
c.events <- msg
|
||||
} else {
|
||||
c.msgs <- msg
|
||||
}
|
||||
}
|
||||
connScannerThread(c.conn, clb)
|
||||
logrus.Error("Tor control connection lost")
|
||||
c.closeConn()
|
||||
}
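// connScannerThread groups control-port reply lines into complete messages:
// a "650+" reply is accumulated until its terminating "<code> OK" line,
// while single-line replies ("650 ...", "250 OK") are emitted as-is. See
// TestConnScannerThread for a concrete transcript.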
|
||||
|
||||
func connScannerThread(r io.Reader, clb func(string)) {
|
||||
scanner := bufio.NewScanner(r)
|
||||
firstLine := true
|
||||
firstLineCode := ""
|
||||
var sb strings.Builder
|
||||
for scanner.Scan() {
|
||||
line := scanner.Text()
|
||||
if firstLine {
|
||||
sb.WriteString(line)
|
||||
sb.WriteString("\n")
|
||||
firstLineCode = line[0:3]
|
||||
if line[3] != ' ' { // "650 " "650+" "250 " "250-"
|
||||
firstLine = false
|
||||
continue
|
||||
}
|
||||
} else if line != firstLineCode+" OK" {
|
||||
sb.WriteString(line)
|
||||
sb.WriteString("\n")
|
||||
continue
|
||||
}
|
||||
|
||||
res := strings.TrimSpace(sb.String())
|
||||
clb(res)
|
||||
firstLine = true
|
||||
sb.Reset()
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Controller) MustDial() {
|
||||
if err := c.Dial(); err != nil {
|
||||
logrus.Fatalf("Unable to connect to Tor control port: %s:%d; %v", c.host, c.port, err)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Controller) Dial() error {
|
||||
conn, err := dial(c.host, c.port)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
logrus.Debug("Successfully connected to the Tor control port.")
|
||||
|
||||
c.connMtx.Lock()
|
||||
defer c.connMtx.Unlock()
|
||||
c.conn = conn
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func dial(host string, port int) (net.Conn, error) {
|
||||
conn, err := net.Dial("tcp", host+":"+strconv.Itoa(port))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return conn, nil
|
||||
}
|
||||
|
||||
func (c *Controller) Auth(password string) error {
|
||||
msg, err := c.Msg(fmt.Sprintf("AUTHENTICATE \"%s\"\n", password))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if msg != "250 OK" {
|
||||
return fmt.Errorf("failed to AUTHENTICATE: %s", msg)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Controller) AuthWithCookie(cookieContent []byte) error {
|
||||
clientNonceBytes := make([]byte, 32)
|
||||
_, _ = brand.Read(clientNonceBytes)
|
||||
clientNonce := strings.ToUpper(hex.EncodeToString(clientNonceBytes))
|
||||
msg, err := c.Msg(fmt.Sprintf("AUTHCHALLENGE SAFECOOKIE %s\n", clientNonce))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
rgx := regexp.MustCompile(`SERVERNONCE=(\S+)`)
|
||||
m := rgx.FindStringSubmatch(msg)
|
||||
if len(m) != 2 {
|
||||
panic("failed to get server nonce")
|
||||
}
|
||||
serverNonce := m[1]
|
||||
cookieString := strings.ToUpper(hex.EncodeToString(cookieContent))
|
||||
// The HMAC input is the raw bytes of cookie || clientNonce || serverNonce;
// a trailing newline is not part of the hex string and must not be included.
toHash := fmt.Sprintf("%s%s%s", cookieString, clientNonce, serverNonce)
toHashBytes, _ := hex.DecodeString(toHash)
|
||||
h := hmac.New(sha256.New, []byte("Tor safe cookie authentication controller-to-server hash"))
|
||||
h.Write(toHashBytes)
|
||||
sha := strings.ToUpper(hex.EncodeToString(h.Sum(nil)))
|
||||
msg, err = c.Msg(fmt.Sprintf("AUTHENTICATE %s\n", sha))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if msg != "250 OK" {
|
||||
return fmt.Errorf("%s", msg)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Controller) SetEvents() error {
|
||||
_, err := c.Msg("SETEVENTS SIGNAL CONF_CHANGED STATUS_SERVER STATUS_CLIENT HS_DESC HS_DESC_CONTENT\n")
|
||||
return err
|
||||
}
|
||||
|
||||
func (c *Controller) GetInfo(s string) (string, error) {
|
||||
return c.Msg(fmt.Sprintf("GETINFO %s\n", s))
|
||||
}
|
||||
|
||||
func (c *Controller) Ip2Country(ip string) (string, error) {
|
||||
line, err := c.Msg(fmt.Sprintf("GETINFO ip-to-country/%s\n", ip))
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
rgx := regexp.MustCompile(`^250-ip-to-country/[^=]+=(\w+)$`)
|
||||
m := rgx.FindStringSubmatch(line)
|
||||
if len(m) != 2 {
|
||||
return "", errors.New("failed to get country: " + string(line))
|
||||
}
|
||||
return m[1], nil
|
||||
}
|
||||
|
||||
func (c *Controller) HSFetch(addr string) error {
|
||||
line, err := c.Msg(fmt.Sprintf("HSFETCH %s\n", addr))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if line != "250 OK" {
|
||||
return errors.New(line)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Controller) HSPost(addr string) error {
|
||||
_, err := c.Msg(fmt.Sprintf("+HSPOST HSADDRESS=%s\r\n%s\r\n.\r\n", strings.TrimRight(addr, ".onion"), "descriptor"))
|
||||
return err
|
||||
}
|
||||
|
||||
func (c *Controller) GetMdConsensus() (string, error) {
|
||||
return c.GetInfo("dir/status-vote/current/consensus-microdesc")
|
||||
}
|
||||
|
||||
type MicroDescriptor struct {
|
||||
Identifiers map[string]string // string -> base64
|
||||
|
||||
raw string
|
||||
}
|
||||
|
||||
func (m *MicroDescriptor) Digest() string {
|
||||
h := sha256.New()
|
||||
h.Write([]byte(m.raw))
|
||||
src := h.Sum(nil)
|
||||
return strings.TrimRight(base64.StdEncoding.EncodeToString(src), "=")
|
||||
}
|
||||
|
||||
func (c *Controller) GetMicrodescriptors() ([]MicroDescriptor, error) {
|
||||
out := make([]MicroDescriptor, 0)
|
||||
|
||||
mdAll, err := c.GetInfo("md/all")
|
||||
if err != nil {
|
||||
return out, err
|
||||
}
|
||||
lines := strings.Split(mdAll, "\n")
|
||||
lines = lines[1 : len(lines)-1]
|
||||
_ = os.WriteFile("logs/mdAll.txt", []byte(strings.Join(lines, "\n")), 0644)
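// Each microdescriptor begins with an "onion-key" line, so that line is
// used as the boundary between entries.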
|
||||
|
||||
desc := ""
|
||||
for _, line := range lines {
|
||||
if line == "onion-key" {
|
||||
if desc != "" {
|
||||
out = append(out, MicroDescriptor{raw: desc, Identifiers: make(map[string]string)})
|
||||
}
|
||||
desc = line + "\n"
|
||||
} else {
|
||||
desc += line + "\n"
|
||||
}
|
||||
}
|
||||
out = append(out, MicroDescriptor{raw: desc, Identifiers: make(map[string]string)})
|
||||
|
||||
for idx := range out {
|
||||
lines := strings.Split(out[idx].raw, "\n")
|
||||
for _, line := range lines {
|
||||
// id ed25519 ufqCAi2Oqasmu67Dm0Ugru+Nk4xxCADXFj6RwdQk4WY
|
||||
if strings.HasPrefix(line, "id ed25519 ") {
|
||||
out[idx].Identifiers["ed25519"] = strings.TrimPrefix(line, "id ed25519 ")
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return out, nil
|
||||
}
|
||||
|
||||
type ProtocolInfoStruct struct {
|
||||
IsHashedPassword bool
|
||||
CookieContent []byte
|
||||
}
|
||||
|
||||
func (c *Controller) ProtocolInfo() (out ProtocolInfoStruct, err error) {
|
||||
msg, err := c.Msg("PROTOCOLINFO\n")
|
||||
if err != nil {
|
||||
return out, err
|
||||
}
|
||||
lines := strings.Split(msg, "\n")
|
||||
if len(lines) != 3 {
|
||||
panic(msg)
|
||||
}
|
||||
if strings.Contains(lines[1], "NULL") {
|
||||
} else if strings.Contains(lines[1], "HASHEDPASSWORD") {
|
||||
out.IsHashedPassword = true
|
||||
} else if strings.Contains(lines[1], "COOKIE") {
|
||||
rgx := regexp.MustCompile(`250-AUTH METHODS=COOKIE,SAFECOOKIE COOKIEFILE="([^"]+)"`)
|
||||
m := rgx.FindStringSubmatch(lines[1])
|
||||
if len(m) != 2 {
|
||||
panic("failed to get cookie path")
|
||||
}
|
||||
cookiePath := m[1]
|
||||
cookieBytes, err := os.ReadFile(cookiePath)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
out.CookieContent = cookieBytes
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (c *Controller) GetVersion() string {
|
||||
versionStr, _ := c.GetInfo("version")
|
||||
versionStr = strings.TrimPrefix(versionStr, "250-version=")
|
||||
return versionStr
|
||||
}
|
||||
|
||||
func (c *Controller) Signal(signal string) (string, error) {
|
||||
return c.Msg(fmt.Sprintf("SIGNAL %s\n", signal))
|
||||
}
|
||||
|
||||
func (c *Controller) MarkTorAsActive() {
|
||||
_, _ = c.Signal("ACTIVE")
|
||||
}
|
||||
|
||||
// GetHiddenServiceDescriptor fetches a descriptor via HSFETCH.
// TODO: we need a way to await these results.
|
||||
func (c *Controller) GetHiddenServiceDescriptor(address string) error {
|
||||
return c.HSFetch(address)
|
||||
}
|
@ -0,0 +1,33 @@
|
||||
package onionbalance
|
||||
|
||||
import (
|
||||
"github.com/stretchr/testify/assert"
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestConnScannerThread(t *testing.T) {
|
||||
r := strings.NewReader(`650+HS_DESC_CONTENT line1
|
||||
line2
|
||||
line3
|
||||
650 OK
|
||||
650 HS_DESC line1
|
||||
250 OK`)
|
||||
var msg1, msg2, msg3 string
|
||||
var msgCount int
|
||||
clb := func(msg string) {
|
||||
msgCount++
|
||||
if msgCount == 1 {
|
||||
msg1 = msg
|
||||
} else if msgCount == 2 {
|
||||
msg2 = msg
|
||||
} else if msgCount == 3 {
|
||||
msg3 = msg
|
||||
}
|
||||
}
|
||||
connScannerThread(r, clb)
|
||||
assert.Equal(t, 3, msgCount)
|
||||
assert.Equal(t, msg1, "650+HS_DESC_CONTENT line1\nline2\nline3")
|
||||
assert.Equal(t, msg2, "650 HS_DESC line1")
|
||||
assert.Equal(t, msg3, "250 OK")
|
||||
}
|
235
endgamefiles/sourcecode/gobalance/pkg/onionbalance/descriptor.go
Normal file
@ -0,0 +1,235 @@
|
||||
package onionbalance
|
||||
|
||||
import (
|
||||
"crypto/aes"
|
||||
"crypto/cipher"
|
||||
"crypto/ed25519"
|
||||
"github.com/sirupsen/logrus"
|
||||
"gobalance/pkg/brand"
|
||||
"gobalance/pkg/btime"
|
||||
"gobalance/pkg/gobpk"
|
||||
"gobalance/pkg/stem/descriptor"
|
||||
"golang.org/x/crypto/sha3"
|
||||
"time"
|
||||
)
|
||||
|
||||
// V3Descriptor is a generic v3 descriptor.
// It serves as the base type for OBDescriptor and ReceivedDescriptor, which
// implement more specific functionality.
|
||||
type V3Descriptor struct {
|
||||
onionAddress string
|
||||
v3Desc *descriptor.HiddenServiceDescriptorV3
|
||||
introSet *IntroductionPointSetV3
|
||||
}
|
||||
|
||||
// GetIntroPoints get the raw intro points for this descriptor.
|
||||
func (d *V3Descriptor) GetIntroPoints() []descriptor.IntroductionPointV3 {
|
||||
return d.introSet.getIntroPointsFlat()
|
||||
}
|
||||
|
||||
// Extract and return the blinded key from the descriptor
|
||||
func (d *V3Descriptor) getBlindedKey() ed25519.PublicKey {
|
||||
// The descriptor signing cert, signs the descriptor signing key using
|
||||
// the blinded key. So the signing key should be the one we want here.
|
||||
return d.v3Desc.SigningCert.SigningKey()
|
||||
}
|
||||
|
||||
// ReceivedDescriptor is an instance v3 descriptor received from the network.
// It supports parsing received descriptors.
|
||||
type ReceivedDescriptor struct {
|
||||
V3Descriptor
|
||||
receivedTs *time.Time
|
||||
}
|
||||
|
||||
// NewReceivedDescriptor parses a descriptor in descText and returns a ReceivedDescriptor object.
// Returns ErrBadDescriptor if the descriptor cannot be used.
|
||||
func NewReceivedDescriptor(descText, onionAddress string) (*ReceivedDescriptor, error) {
|
||||
d := &ReceivedDescriptor{}
|
||||
v3Desc := &descriptor.HiddenServiceDescriptorV3{}
|
||||
|
||||
v3Desc.FromStr(descText)
|
||||
if _, err := v3Desc.Decrypt(onionAddress); err != nil {
|
||||
logrus.Warnf("Descriptor is corrupted (%s).", onionAddress)
|
||||
return nil, ErrBadDescriptor
|
||||
}
|
||||
tmp := btime.Clock.Now().UTC()
|
||||
d.receivedTs = &tmp
|
||||
logrus.Debugf("Successfuly decrypted descriptor for %s!", onionAddress)
|
||||
|
||||
d.onionAddress = onionAddress
|
||||
d.v3Desc = v3Desc
|
||||
p := Params()
|
||||
nIntroduction := p.NIntroduction() + int64(len(d.v3Desc.InnerLayer.IntroductionPoints))
|
||||
p.SetNIntroduction(nIntroduction)
|
||||
// An IntroductionPointSetV3 object with the intros of this descriptor
|
||||
logrus.Debugf("New Descriptor Received for %s (%d introduction points)", onionAddress, len(d.v3Desc.InnerLayer.IntroductionPoints))
|
||||
d.introSet = NewIntroductionPointSetV3([][]descriptor.IntroductionPointV3{d.v3Desc.InnerLayer.IntroductionPoints})
|
||||
logrus.Debugf("Introduction count, %d", p.NIntroduction())
|
||||
return d, nil
|
||||
}
|
||||
|
||||
// IsOld returns True if this received descriptor is old. If so, we should consider the
|
||||
// instance as offline.
|
||||
func (d *ReceivedDescriptor) IsOld() bool {
|
||||
p := Params()
|
||||
receivedAge := btime.Clock.Now().UTC().Sub(*d.receivedTs).Nanoseconds()
|
||||
return receivedAge > p.InstanceDescriptorTooOld()
|
||||
}
|
||||
|
||||
type OBDescriptor struct {
|
||||
V3Descriptor
|
||||
lastPublishAttemptTs *time.Time
|
||||
lastUploadTs *time.Time
|
||||
responsibleHsdirs []string
|
||||
consensus *ConsensusDoc
|
||||
}
|
||||
|
||||
// NewOBDescriptor creates a v3 descriptor meant to be published to the
// network by OnionBalance.
// It supports generating descriptors.
// Returns ErrBadDescriptor if we can't or should not generate a valid descriptor.
|
||||
func NewOBDescriptor(onionAddress string, identityPrivKey gobpk.PrivateKey, blindingParam []byte, introPoints []descriptor.IntroductionPointV3, isFirstDesc bool, consensus *ConsensusDoc) (*OBDescriptor, error) {
|
||||
d := &OBDescriptor{}
|
||||
d.consensus = consensus
|
||||
// Timestamp of the last attempt to assemble this descriptor
|
||||
d.lastPublishAttemptTs = nil
|
||||
// Timestamp we last uploaded this descriptor
|
||||
d.lastUploadTs = nil
|
||||
// Set of responsible HSDirs for last time we uploaded this descriptor
|
||||
d.responsibleHsdirs = nil
|
||||
|
||||
// Start generating descriptor
|
||||
_, descSigningKey, _ := ed25519.GenerateKey(brand.Reader())
|
||||
|
||||
// Get the intro points for this descriptor and recertify them!
|
||||
recertifiedIntroPoints := make([]descriptor.IntroductionPointV3, 0)
|
||||
|
||||
for _, ip := range introPoints {
|
||||
rec := d.recertifyIntroPoint(ip, descSigningKey)
|
||||
recertifiedIntroPoints = append(recertifiedIntroPoints, rec)
|
||||
}
|
||||
|
||||
revCounter := d.getRevisionCounter(identityPrivKey, isFirstDesc)
|
||||
|
||||
v3DescInnerLayer := descriptor.InnerLayerCreate(recertifiedIntroPoints)
|
||||
v3Desc := descriptor.HiddenServiceDescriptorV3Create(blindingParam, identityPrivKey, descSigningKey, v3DescInnerLayer, revCounter)
|
||||
|
||||
// TODO: stem should probably initialize this itself so that it is consistent
// between descriptor creation (where this is not initialized) and descriptor
// parsing (where it is initialized).
|
||||
v3Desc.InnerLayer = &v3DescInnerLayer
|
||||
|
||||
// Check max size is within range
|
||||
if len(v3Desc.String()) > MaxDescriptorSize {
|
||||
logrus.Errorf("Created descriptor is too big (%d bytes [max 50000]). Consider relaxing the number of introduction points included in a descriptor (see NIntrosWanted)", len(v3Desc.String()))
|
||||
return nil, ErrBadDescriptor
|
||||
}
|
||||
|
||||
d.onionAddress = onionAddress
|
||||
d.v3Desc = v3Desc
|
||||
d.introSet = NewIntroductionPointSetV3([][]descriptor.IntroductionPointV3{d.v3Desc.InnerLayer.IntroductionPoints})
|
||||
|
||||
return d, nil
|
||||
}
|
||||
|
||||
// MaxDescriptorSize Max descriptor size (in bytes) (see hs_cache_get_max_descriptor_size() in
|
||||
// little-t-tor)
|
||||
const MaxDescriptorSize = 50000
|
||||
|
||||
func (d *OBDescriptor) setLastPublishAttemptTs(lastPublishAttemptTs time.Time) {
|
||||
d.lastPublishAttemptTs = &lastPublishAttemptTs
|
||||
}
|
||||
|
||||
func (d *OBDescriptor) setLastUploadTs(lastUploadTs time.Time) {
|
||||
d.lastUploadTs = &lastUploadTs
|
||||
}
|
||||
|
||||
func (d *OBDescriptor) setResponsibleHsdirs(responsibleHsdirs []string) {
|
||||
d.responsibleHsdirs = responsibleHsdirs
|
||||
}
|
||||
|
||||
// Get the revision counter using the order-preserving-encryption scheme from
|
||||
// rend-spec-v3.txt section F.2.
|
||||
func (d *OBDescriptor) getRevisionCounter(identityPrivKey gobpk.PrivateKey, isFirstDesc bool) int64 {
|
||||
now := btime.Clock.Now().Unix()
|
||||
|
||||
// TODO: Mention that this is done with the private key instead of the blinded priv key
|
||||
// this means that this won't cooperate with normal tor
|
||||
privkeyBytes := identityPrivKey.Seed()
|
||||
|
||||
var srvStart int64
|
||||
if isFirstDesc {
|
||||
srvStart = d.consensus.GetStartTimeOfPreviousSrvRun()
|
||||
} else {
|
||||
srvStart = d.consensus.GetStartTimeOfCurrentSrvRun()
|
||||
}
|
||||
|
||||
opeResult, secondsSinceSrvStart := getRevisionCounterDet(privkeyBytes, now, srvStart)
|
||||
logrus.Debugf("Rev counter for descriptor (FirstDesc %t) (SRV secs %d, OPE %d)", isFirstDesc, secondsSinceSrvStart, opeResult)
|
||||
return opeResult
|
||||
}
|
||||
|
||||
func getRevisionCounterDet(privkeyBytes []byte, now, srvStart int64) (opeResult int64, secondsSinceSrvStart int64) {
|
||||
cipherKeyTmp := sha3.Sum256([]byte("rev-counter-generation" + string(privkeyBytes))) // good
|
||||
cipherKey := cipherKeyTmp[:]
|
||||
|
||||
secondsSinceSrvStart = now - srvStart
|
||||
// This must be strictly positive
|
||||
secondsSinceSrvStart += 1
|
||||
|
||||
iv := make([]byte, 16)
|
||||
block, _ := aes.NewCipher(cipherKey)
|
||||
stream := cipher.NewCTR(block, iv)
|
||||
getOpeSchemeWords := func() int64 {
|
||||
v := make([]byte, 16)
|
||||
stream.XORKeyStream(v, []byte("\x00\x00"))
|
||||
return int64(v[0]) + 256*int64(v[1]) + 1
|
||||
}
|
||||
|
||||
for i := int64(0); i < secondsSinceSrvStart; i++ {
|
||||
opeResult += getOpeSchemeWords()
|
||||
}
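// Each iteration adds a keyed pseudo-random word in [1, 65536], so for a
// fixed key the counter is strictly increasing in time; see
// TestGetRevisionCounterDet, which pins secondsSinceSrvStart 122771 to
// counter 4033953644 for a known key.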
|
||||
|
||||
return opeResult, secondsSinceSrvStart
|
||||
}
|
||||
|
||||
func (d *OBDescriptor) recertifyIntroPoint(introPoint descriptor.IntroductionPointV3, descriptorSigningKey ed25519.PrivateKey) descriptor.IntroductionPointV3 {
|
||||
originalAuthKeyCert := introPoint.AuthKeyCert
|
||||
originalEncKeyCert := introPoint.EncKeyCert
|
||||
|
||||
// We have already removed all the intros with legacy keys. Make sure that
// no legacy intro sneaks up on us, because it would result in
// unparseable descriptors if we don't recertify it (and we won't).
|
||||
// assert(not intro_point.legacy_key_cert)
|
||||
|
||||
// Get all the certs we need to recertify.
// [In the original Python implementation this required namedtuple._replace
// because stem exposes those fields as read-only attributes; in Go we can
// simply assign the struct fields.]
|
||||
introPoint.AuthKeyCert = d.recertifyEdCertificate(originalAuthKeyCert, descriptorSigningKey)
|
||||
introPoint.EncKeyCert = d.recertifyEdCertificate(originalEncKeyCert, descriptorSigningKey)
|
||||
introPoint.AuthKeyCertRaw = introPoint.AuthKeyCert.ToBase64()
|
||||
introPoint.EncKeyCertRaw = introPoint.EncKeyCert.ToBase64()
|
||||
recertifiedIntroPoint := introPoint
|
||||
|
||||
return recertifiedIntroPoint
|
||||
}
|
||||
|
||||
// Recertify an HSv3 intro point certificate using the new descriptor signing
|
||||
// key so that it can be accepted as part of a new descriptor.
|
||||
// "Recertifying" means taking the certified key and signing it with a new
|
||||
// key.
|
||||
// Return the new certificate.
|
||||
func (d *OBDescriptor) recertifyEdCertificate(edCert descriptor.Ed25519CertificateV1, descriptorSigningKey ed25519.PrivateKey) descriptor.Ed25519CertificateV1 {
|
||||
return recertifyEdCertificate(edCert, descriptorSigningKey)
|
||||
}
|
||||
|
||||
func recertifyEdCertificate(edCert descriptor.Ed25519CertificateV1, descriptorSigningKey ed25519.PrivateKey) descriptor.Ed25519CertificateV1 {
|
||||
extensions := []descriptor.Ed25519Extension{descriptor.NewEd25519Extension(descriptor.HasSigningKey, 0, descriptorSigningKey.Public().(ed25519.PublicKey))}
|
||||
newCert := descriptor.NewEd25519CertificateV1(edCert.Typ, &edCert.Expiration, edCert.KeyType, edCert.Key, extensions, descriptorSigningKey, nil)
|
||||
return newCert
|
||||
}
|
@ -0,0 +1,45 @@
|
||||
package onionbalance
|
||||
|
||||
import (
|
||||
"crypto/ed25519"
|
||||
"crypto/x509"
|
||||
"encoding/base64"
|
||||
"encoding/pem"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"gobalance/pkg/stem/descriptor"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestRecertify(t *testing.T) {
|
||||
signingKeyPem := `-----BEGIN PRIVATE KEY-----
|
||||
MC4CAQAwBQYDK2VwBCIEIOcEHVwEY9iXpRtgQ9V3gfRPxWnVLueY911dGZDmLsE5
|
||||
-----END PRIVATE KEY-----`
|
||||
in := `-----BEGIN ED25519 CERT-----
|
||||
AQkABvnyAeKc+JWLUCqeZ0PeYQMLB/s1x78MnHbaVJEJRydNiS4MAQAgBABcfN7F
|
||||
QCPKVVMMIsn/OMg/XEQjOhfiqBB7DDU36l7dRyLU9kxujPUIBRUN229MYnIZE7iC
|
||||
Bbtp5EM7G8R6GeX63anXSwcgldZJMa3hTq4QqhJf92nIOWakmAh9N++z+wo=
|
||||
-----END ED25519 CERT-----`
|
||||
expected := `-----BEGIN ED25519 CERT-----
|
||||
AQkABvnyAeKc+JWLUCqeZ0PeYQMLB/s1x78MnHbaVJEJRydNiS4MAQAgBADpdmL5
|
||||
jB9FTH/efQdCjogJa4F2/Xh9qJNiWmKWQYHdFB0b6xL7WctQFkBPWX0E+wyBjN+s
|
||||
kcA5N/9MA4vWHYTeR2NI10q48FfC/A3iXu1W9f+vaVhYGr2rsgWmqt86Ngc=
|
||||
-----END ED25519 CERT-----`
|
||||
|
||||
block, _ := pem.Decode([]byte(signingKeyPem))
|
||||
key, _ := x509.ParsePKCS8PrivateKey(block.Bytes)
|
||||
descriptorSigningKey := key.(ed25519.PrivateKey)
|
||||
edCert := descriptor.Ed25519CertificateFromBase64(in)
|
||||
out := recertifyEdCertificate(edCert, descriptorSigningKey)
|
||||
assert.Equal(t, expected, out.ToBase64())
|
||||
}
|
||||
|
||||
func TestGetRevisionCounterDet(t *testing.T) {
|
||||
pk, _ := base64.StdEncoding.DecodeString(`5FPpKghcg2LnAuG8eO1n/+EwYKePXbxl1kFPp+iKbb8=`)
|
||||
now := int64(1645956370)
|
||||
srvStart := int64(1645833600)
|
||||
expected := int64(4033953644)
|
||||
expectedSSS := int64(122771)
|
||||
opeResult, sss := getRevisionCounterDet(pk, now, srvStart)
|
||||
assert.Equal(t, expectedSSS, sss)
|
||||
assert.Equal(t, expected, opeResult)
|
||||
}
|
197
endgamefiles/sourcecode/gobalance/pkg/onionbalance/hashring.go
Normal file
@ -0,0 +1,197 @@
|
||||
package onionbalance
|
||||
|
||||
import (
|
||||
"crypto/ed25519"
|
||||
"encoding/base64"
|
||||
"encoding/binary"
|
||||
"github.com/sirupsen/logrus"
|
||||
"golang.org/x/crypto/sha3"
|
||||
"sort"
|
||||
)
|
||||
|
||||
// GetSrvAndTimePeriod returns the SRV and time period based on the current consensus time.
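// Roughly following rend-spec-v3.txt: the first descriptor uses the previous
// SRV and the second the current one, with the time period shifted by one
// depending on whether valid-after falls between an SRV start and the next
// TP start (the four cases logged below).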
|
||||
func GetSrvAndTimePeriod(isFirstDescriptor bool, consensus ConsensusDoc) ([]byte, int64) {
|
||||
validAfter := consensus.ValidAfter.Unix()
|
||||
currentTp := consensus.GetTimePeriodNum(validAfter)
|
||||
previousTp := currentTp - 1
|
||||
nextTp := currentTp + 1
|
||||
// assert(previous_tp > 0)
|
||||
var srv []byte
|
||||
var tp int64
|
||||
var casee int
|
||||
if isFirstDescriptor {
|
||||
if timeBetweenTpAndSrv(validAfter, consensus) {
|
||||
srv = consensus.GetPreviousSrv(previousTp)
|
||||
tp = previousTp
|
||||
casee = 1
|
||||
} else {
|
||||
srv = consensus.GetPreviousSrv(currentTp)
|
||||
tp = currentTp
|
||||
casee = 2
|
||||
}
|
||||
} else {
|
||||
if timeBetweenTpAndSrv(validAfter, consensus) {
|
||||
srv = consensus.GetCurrentSrv(currentTp)
|
||||
tp = currentTp
|
||||
casee = 3
|
||||
} else {
|
||||
srv = consensus.GetCurrentSrv(nextTp)
|
||||
tp = nextTp
|
||||
casee = 4
|
||||
}
|
||||
}
|
||||
srvB64 := base64.StdEncoding.EncodeToString(srv)
|
||||
logrus.Debugf("For valid_after %d we got SRV %s and TP %d (case: #%d)\n", validAfter, srvB64, tp, casee)
|
||||
return srv, tp
|
||||
}
|
||||
|
||||
func timeBetweenTpAndSrv(validAfter int64, consensus ConsensusDoc) bool {
|
||||
srvStartTime := consensus.GetStartTimeOfCurrentSrvRun()
|
||||
tpStartTime := consensus.GetStartTimeOfNextTimePeriod(srvStartTime)
|
||||
if validAfter >= srvStartTime && validAfter < tpStartTime {
|
||||
logrus.Debug("We are between SRV and TP")
|
||||
return false
|
||||
}
|
||||
logrus.Debugf("We are between TP and SRV (valid_after: %d, srv_start_time: %d -> tp_start_time: %d)\n", validAfter, srvStartTime, tpStartTime)
|
||||
return true
|
||||
}
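// GetResponsibleHsdirs walks the hash ring: for each of the HsdirNReplicas
// replicas it computes the hidden-service index for the blinded key and
// collects the next HsdirSpreadStore distinct nodes clockwise from that
// position, wrapping around the end of the sorted ring.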
|
||||
|
||||
func GetResponsibleHsdirs(blindedPubkey ed25519.PublicKey, isFirstDescriptor bool, consensus *Consensus) ([]string, error) {
|
||||
p := Params()
|
||||
responsibleHsdirs := make([]string, 0)
|
||||
|
||||
// dictionary { <node hsdir index> : Node , .... }
|
||||
nodeHashRing := getHashRingForDescriptor(isFirstDescriptor, consensus)
|
||||
if len(nodeHashRing) == 0 {
|
||||
return nil, ErrEmptyHashRing
|
||||
}
|
||||
|
||||
sortedHashRingList := make([]string, 0)
|
||||
|
||||
for k := range nodeHashRing {
|
||||
sortedHashRingList = append(sortedHashRingList, k)
|
||||
}
|
||||
sort.Slice(sortedHashRingList, func(i, j int) bool {
|
||||
return sortedHashRingList[i] < sortedHashRingList[j]
|
||||
})
|
||||
|
||||
logrus.Infof("Initialized hash ring of size %d (blinded key: %s)", len(nodeHashRing), base64.StdEncoding.EncodeToString(blindedPubkey))
|
||||
|
||||
for replicaNum := 1; replicaNum < p.HsdirNReplicas()+1; replicaNum++ {
|
||||
// The HSDirs that we are going to store this replica in
|
||||
replicaStoreHsdirs := make([]string, 0)
|
||||
|
||||
hiddenServiceIndex := getHiddenServiceIndex(blindedPubkey, replicaNum, isFirstDescriptor, consensus)
|
||||
|
||||
// Find position of descriptor ID in the HSDir list
|
||||
index := sort.SearchStrings(sortedHashRingList, string(hiddenServiceIndex))
|
||||
|
||||
logrus.Infof("\t Tried with HS index %x got position %d", hiddenServiceIndex, index)
|
||||
|
||||
for len(replicaStoreHsdirs) < p.HsdirSpreadStore() {
|
||||
var hsdirKey string
|
||||
if index < len(sortedHashRingList) {
|
||||
hsdirKey = sortedHashRingList[index]
|
||||
index += 1
|
||||
} else {
|
||||
// Wrap around when we reach the end of the HSDir list
|
||||
index = 0
|
||||
hsdirKey = sortedHashRingList[index]
|
||||
}
|
||||
hsdirNode := nodeHashRing[hsdirKey]
|
||||
|
||||
// Check if we have already added this node to this
|
||||
// replica. This should never happen on the real network but
|
||||
// might happen in small testnets like chutney!
|
||||
found := false
|
||||
for _, el := range replicaStoreHsdirs {
|
||||
if el == string(hsdirNode.GetHexFingerprint()) {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if found {
|
||||
logrus.Debug("Ignoring already added HSDir to this replica!")
|
||||
break
|
||||
}
|
||||
|
||||
// Check if we have already added this node to the responsible
|
||||
// HSDirs. This can happen in the second replica, and we should
|
||||
// skip the node
|
||||
found = false
|
||||
for _, el := range responsibleHsdirs {
|
||||
if el == string(hsdirNode.GetHexFingerprint()) {
|
||||
found = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if found {
|
||||
logrus.Debug("Ignoring already added HSDir!")
|
||||
continue
|
||||
}
|
||||
|
||||
logrus.Debugf("%d: %s: %x", index, hsdirNode.GetHexFingerprint(), hsdirKey)
|
||||
|
||||
replicaStoreHsdirs = append(replicaStoreHsdirs, string(hsdirNode.GetHexFingerprint()))
|
||||
}
|
||||
|
||||
responsibleHsdirs = append(responsibleHsdirs, replicaStoreHsdirs...)
|
||||
}
|
||||
|
||||
logrus.Debugf("Amount of Responsible HSDIR: %d", len(responsibleHsdirs))
|
||||
if len(responsibleHsdirs) != p.HsdirNReplicas()*p.HsdirSpreadStore() {
|
||||
logrus.Panicf("Got the wrong number of responsible HSDirs: %d. Aborting", len(responsibleHsdirs))
|
||||
}
|
||||
|
||||
//For responsible HSDIR splitting
|
||||
start := p.DirStart()
|
||||
if start >= 1 {
|
||||
logrus.Debugf("[DIRSPLIT] RAN SPLIT!")
|
||||
end := p.DirEnd()
|
||||
start -= 1
|
||||
end -= 1
|
||||
if start >= 0 && end < len(responsibleHsdirs) && start <= end {
|
||||
responsibleHsdirs = responsibleHsdirs[start : end+1]
|
||||
}
|
||||
}
|
||||
|
||||
p.SetAdaptHSDirCount(int64(len(responsibleHsdirs)))
|
||||
|
||||
return responsibleHsdirs, nil
|
||||
}
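// hsdir_index = SHA3-256("store-at-idx" | blinded_pubkey | INT_8(replica) |
// INT_8(period_length) | INT_8(period_num)), matching the hashring formula
// in rend-spec-v3.txt.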
|
||||
|
||||
func getHiddenServiceIndex(blindedPubkey ed25519.PublicKey, replicaNum int, isFirstDescriptor bool, consensus *Consensus) []byte {
|
||||
periodLength := consensus.Consensus().GetTimePeriodLength()
|
||||
replicaNumInt8 := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(replicaNumInt8[len(replicaNumInt8)-8:], uint64(replicaNum))
|
||||
periodLengthInt8 := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(periodLengthInt8[len(periodLengthInt8)-8:], uint64(periodLength))
|
||||
_, timePeriodNum := GetSrvAndTimePeriod(isFirstDescriptor, *consensus.Consensus())
|
||||
logrus.Infof("Getting HS index with TP#%d for %t descriptor (%d replica) ", timePeriodNum, isFirstDescriptor, replicaNum)
|
||||
periodNumInt8 := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(periodNumInt8[len(periodNumInt8)-8:], uint64(timePeriodNum))
|
||||
|
||||
hashBody := "store-at-idx" + string(blindedPubkey) + string(replicaNumInt8) + string(periodLengthInt8) + string(periodNumInt8)
|
||||
|
||||
hsIndex := sha3.Sum256([]byte(hashBody))
|
||||
|
||||
return hsIndex[:]
|
||||
}
|
||||
|
||||
func getHashRingForDescriptor(isFirstDescriptor bool, consensus *Consensus) map[string]*TorNode {
|
||||
nodeHashRing := make(map[string]*TorNode)
|
||||
srv, timePeriodNum := GetSrvAndTimePeriod(isFirstDescriptor, *consensus.Consensus())
|
||||
logrus.Infof("Using srv %x and TP#%d (%t descriptor)", srv, timePeriodNum, isFirstDescriptor)
|
||||
for _, node := range consensus.GetNodes() {
|
||||
hsdirIndex, err := node.GetHsdirIndex(srv, timePeriodNum, consensus)
|
||||
if err != nil {
|
||||
if err == ErrNoHSDir || err == ErrNoEd25519Identity {
|
||||
logrus.Debugf("Could not find ed25519 for node %s (%s)", node.getRouterstatus().Fingerprint, err.Error())
|
||||
continue
|
||||
}
|
||||
}
|
||||
logrus.Debugf("%t: Node: %s, index: %x", isFirstDescriptor, node.GetHexFingerprint(), hsdirIndex)
|
||||
nodeHashRing[string(hsdirIndex)] = node
|
||||
}
|
||||
return nodeHashRing
|
||||
}
|
@ -0,0 +1,9 @@
|
||||
package ext
|
||||
|
||||
import "crypto/ed25519"
|
||||
|
||||
func PublickeyFromESK(h []byte) ed25519.PublicKey {
|
||||
a := decodeInt(h[:32])
|
||||
A := scalarmult(bB, a)
|
||||
return encodepoint(A)
|
||||
}
@ -0,0 +1,133 @@
package ext

import "math/big"

// Port of the ed25519 reference implementation's group arithmetic, working
// on affine Edwards coordinates modulo q = 2^255 - 19. bB is the standard
// base point and d the curve constant -121665/121666.
var b = 256
var by = biMul(bi(4), inv(bi(5)))
var bx = xrecover(by)
var q = biSub(biExp(bi(2), bi(255)), bi(19))
var bB = []*big.Int{biMod(bx, q), biMod(by, q)}
var I = expmod(bi(2), biDiv(biSub(q, bi(1)), bi(4)), q)
var d = bi(0).Mul(bi(-121665), inv(bi(121666)))

// encodepoint serializes a point to 32 bytes: 255 little-endian bits of y,
// with the low bit of x stored in the top bit.
func encodepoint(P []*big.Int) []byte {
	x := P[0]
	y := P[1]
	bits := make([]uint8, 0)
	for i := 0; i < b-1; i++ {
		bits = append(bits, uint8(biAnd(biRsh(y, uint(i)), bi(1)).Int64()))
	}
	bits = append(bits, uint8(biAnd(x, bi(1)).Int64()))
	by := make([]uint8, 0)
	for i := 0; i < b/8; i++ {
		sum := uint8(0)
		for j := 0; j < 8; j++ {
			sum += bits[i*8+j] << j
		}
		by = append(by, sum)
	}
	return by
}

// decodeInt reads a little-endian 256-bit integer.
func decodeInt(s []uint8) *big.Int {
	sum := bi(0)
	for i := 0; i < 256; i++ {
		e := biExp(bi(2), bi(int64(i)))
		m := bi(int64(Bit(s, int64(i))))
		sum = sum.Add(sum, biMul(e, m))
	}
	return sum
}

// scalarmult computes e*P by recursive double-and-add.
// (e.And mutates e in place; callers do not reuse e afterwards.)
func scalarmult(P []*big.Int, e *big.Int) []*big.Int {
	if e.Cmp(bi(0)) == 0 {
		return []*big.Int{bi(0), bi(1)} // the neutral element
	}
	Q := scalarmult(P, biDiv(e, bi(2)))
	Q = edwards(Q, Q)
	if e.And(e, bi(1)).Int64() == 1 {
		Q = edwards(Q, P)
	}
	return Q
}

// edwards adds two points with the Edwards addition law.
func edwards(P, Q []*big.Int) []*big.Int {
	x1 := P[0]
	y1 := P[1]
	x2 := Q[0]
	y2 := Q[1]
	x3 := biMul(biAdd(biMul(x1, y2), biMul(x2, y1)), inv(biAdd(bi(1), biMul(biMul(biMul(biMul(d, x1), x2), y1), y2))))
	y3 := biMul(biAdd(biMul(y1, y2), biMul(x1, x2)), inv(biSub(bi(1), biMul(biMul(biMul(biMul(d, x1), x2), y1), y2))))
	return []*big.Int{biMod(x3, q), biMod(y3, q)}
}

// xrecover recovers the x coordinate from y, choosing the even root.
func xrecover(y *big.Int) *big.Int {
	xx := biMul(biSub(biMul(y, y), bi(1)), inv(biAdd(biMul(biMul(d, y), y), bi(1))))
	x := expmod(xx, biDiv(biAdd(q, bi(3)), bi(8)), q)
	if biMod(biSub(biMul(x, x), xx), q).Int64() != 0 {
		x = biMod(biMul(x, I), q)
	}
	if biMod(x, bi(2)).Int64() != 0 {
		x = biSub(q, x)
	}
	return x
}

// inv computes the modular inverse via Fermat's little theorem.
func inv(x *big.Int) *big.Int {
	return expmod(x, biSub(q, bi(2)), q)
}

// expmod computes b^e mod m by recursive square-and-multiply.
func expmod(b, e, m *big.Int) *big.Int {
	if e.Cmp(bi(0)) == 0 {
		return bi(1)
	}
	t := biMod(biExp(expmod(b, biDiv(e, bi(2)), m), bi(2)), m)
	if biAnd(e, bi(1)).Int64() == 1 {
		t = biMod(biMul(t, b), m)
	}
	return t
}

// Bit returns bit i of h (little-endian).
func Bit(h []uint8, i int64) uint8 {
	return (h[i/8] >> (i % 8)) & 1
}

// bi and the bi* helpers wrap math/big to keep the formulas above readable.
func bi(v int64) *big.Int {
	return big.NewInt(v)
}

func biExp(a, b *big.Int) *big.Int {
	return bi(0).Exp(a, b, nil)
}

func biDiv(a, b *big.Int) *big.Int {
	return bi(0).Div(a, b)
}

func biSub(a, b *big.Int) *big.Int {
	return bi(0).Sub(a, b)
}

func biAdd(a, b *big.Int) *big.Int {
	return bi(0).Add(a, b)
}

func biAnd(a, b *big.Int) *big.Int {
	return bi(0).And(a, b)
}

func biRsh(a *big.Int, b uint) *big.Int {
	return bi(0).Rsh(a, b)
}

func biLsh(a *big.Int, b uint) *big.Int {
	return bi(0).Lsh(a, b)
}

func biMul(a, b *big.Int) *big.Int {
	return bi(0).Mul(a, b)
}

func biMod(a, b *big.Int) *big.Int {
	return bi(0).Mod(a, b)
}
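
// Illustrative sanity checks (not part of the original file) for the
// arithmetic above: 1*B == B, and 2*B equals B + B:
//
//	one := scalarmult(bB, bi(1))
//	two := scalarmult(bB, bi(2))
//	sum := edwards(bB, bB)
//	// bytes.Equal(encodepoint(one), encodepoint(bB)) == true
//	// bytes.Equal(encodepoint(two), encodepoint(sum)) == true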
158
endgamefiles/sourcecode/gobalance/pkg/onionbalance/instance.go
Normal file
@ -0,0 +1,158 @@
package onionbalance

import (
	"errors"
	"github.com/sirupsen/logrus"
	"gobalance/pkg/btime"
	"gobalance/pkg/stem/descriptor"
	"strings"
	"sync"
	"time"
)

type Instance struct {
	controller                    *Controller
	OnionAddress                  string
	introSetChangedSincePublished bool
	descriptor                    *ReceivedDescriptor
	descriptorMtx                 sync.RWMutex
	IntroSetModifiedTimestamp     *time.Time
}

func NewInstance(controller *Controller, onionAddress string) *Instance {
	p := Params()
	i := &Instance{}
	i.controller = controller

	if onionAddress != "" {
		onionAddress = strings.Replace(onionAddress, ".onion", "", 1)
	}
	i.OnionAddress = onionAddress

	// The stored onion address does not contain the '.onion' suffix.
	logrus.Warnf("Loaded instance %s", onionAddress)

	nInstances := p.NInstances()
	p.SetNInstances(nInstances + 1)
	i.introSetChangedSincePublished = false

	i.SetDescriptor(nil)

	// When was the intro set of this instance last modified?
	i.IntroSetModifiedTimestamp = nil
	return i
}

func (i *Instance) SetDescriptor(newDescriptor *ReceivedDescriptor) {
	i.descriptorMtx.Lock()
	defer i.descriptorMtx.Unlock()
	i.descriptor = newDescriptor
}

func (i *Instance) GetDescriptor() *ReceivedDescriptor {
	i.descriptorMtx.RLock()
	defer i.descriptorMtx.RUnlock()
	return i.descriptor
}

// hasOnionAddress reports whether this instance has the given onion address.
func (i *Instance) hasOnionAddress(onionAddress string) bool {
	// Strip the ".onion" part of the address if it exists since some
	// subsystems don't use it (e.g. Tor sometimes omits it from control
	// port responses).
	myOnionAddress := strings.TrimSuffix(i.OnionAddress, ".onion")
	theirOnionAddress := strings.TrimSuffix(onionAddress, ".onion")

	return myOnionAddress == theirOnionAddress
}

// FetchDescriptor tries to fetch a fresh descriptor for this service instance from the HSDirs.
func (i *Instance) FetchDescriptor() error {
	logrus.Debugf("Trying to fetch a descriptor for instance %s.onion.", i.OnionAddress)
	return i.controller.GetHiddenServiceDescriptor(i.OnionAddress)
}

var ErrInstanceHasNoDescriptor = errors.New("InstanceHasNoDescriptor")
var ErrInstanceIsOffline = errors.New("InstanceIsOffline")

// GetIntrosForPublish returns the stem/descriptor.IntroductionPointV3 objects from this instance's descriptor.
// It returns ErrInstanceHasNoDescriptor if there is no descriptor for this instance,
// and ErrInstanceIsOffline if the instance is considered offline.
func (i *Instance) GetIntrosForPublish() ([]descriptor.IntroductionPointV3, error) {
	p := Params()
	instDescriptor := i.GetDescriptor()
	if instDescriptor == nil {
		adaptDown := p.AdaptDown()
		p.SetAdaptDown(adaptDown + 1)
		adaptDownNoDescriptor := p.AdaptDownNoDescriptor()
		p.SetAdaptDownNoDescriptor(adaptDownNoDescriptor + 1)
		return nil, ErrInstanceHasNoDescriptor
	}
	if instDescriptor.IsOld() {
		adaptDown := p.AdaptDown()
		p.SetAdaptDown(adaptDown + 1)

		adaptDownInstanceOld := p.AdaptDownInstanceOld()
		p.SetAdaptDownInstanceOld(adaptDownInstanceOld + 1)
		return nil, ErrInstanceIsOffline
	}
	adaptUp := p.AdaptUp()
	p.SetAdaptUp(adaptUp + 1)
	return instDescriptor.GetIntroPoints(), nil
}

// registerDescriptor registers a received descriptor (descriptorText) for
// onionAddress with this instance.
func (i *Instance) registerDescriptor(descriptorText, onionAddress string) {
	logrus.Debugf("Found instance %s for this new descriptor!", i.OnionAddress)
	p := Params()

	if onionAddress != i.OnionAddress {
		panic("onion_address != i.OnionAddress")
	}

	// Parse the descriptor. If it parsed correctly, we know that this
	// descriptor is truly for this instance (since the onion address
	// matches).
	newDescriptor, err := NewReceivedDescriptor(descriptorText, onionAddress)
	if err != nil {
		if err == ErrBadDescriptor {
			logrus.Warningf("Received bad descriptor for %s. Ignoring.", i.OnionAddress)
			return
		}
		panic(err)
	}

	// Before replacing the current descriptor with this one, check if the
	// introduction point set changed:

	// If this is the first descriptor for this instance, the intro point set changed.

	if i.GetDescriptor() == nil {
		logrus.Infof("This is the first time we've seen a descriptor for instance %s!", i.OnionAddress)
		tmp := btime.Clock.Now().UTC()
		i.IntroSetModifiedTimestamp = &tmp
		i.SetDescriptor(newDescriptor)
		return
	}

	if i.GetDescriptor() == nil {
		panic("i.descriptor == nil")
	}
	if newDescriptor.introSet.Len() == 0 {
		panic("new_descriptor.introSet.Len() == 0")
	}

	// We already have a descriptor but this is a new one. Check the intro points!
	if !newDescriptor.introSet.Equals(*i.GetDescriptor().introSet) {
		logrus.Infof("We got a new descriptor for instance %s and the intro set changed!", i.OnionAddress)
		tmp := btime.Clock.Now().UTC()
		i.IntroSetModifiedTimestamp = &tmp
		adaptIntroChanged := p.AdaptIntroChanged()
		p.SetAdaptIntroChanged(adaptIntroChanged + 1)
	} else {
		logrus.Infof("We got a new descriptor for instance %s but the intro set did not change.", i.OnionAddress)
	}
	i.SetDescriptor(newDescriptor)
}
336
endgamefiles/sourcecode/gobalance/pkg/onionbalance/manager.go
Normal file
@ -0,0 +1,336 @@
package onionbalance

import (
	"errors"
	"github.com/sirupsen/logrus"
	"github.com/urfave/cli/v2"
	"math/rand"
	"os"
	"regexp"
	"strconv"
	"time"
)

func loadDefaults() {
	p := Params()
	// InitialCallbackDelay How long to wait for onionbalance to bootstrap before starting periodic
	// events (in nanoseconds). Recommended: a 10-second base plus 2 seconds per front
	// (e.g. 10 instances = 10 + (10*2) = 30 seconds), times 1 billion nanoseconds.
	p.SetInitialCallbackDelay(75 * 1000000000) // time is in nanoseconds
	// FetchDescriptorFrequency How often we should fetch instance descriptors (in nanoseconds).
	p.SetFetchDescriptorFrequency(20 * 1000000000)
	// PublishDescriptorCheckFrequency How often we should check whether we should publish our frontend
	// descriptor (in nanoseconds). Triggering this callback doesn't mean we will actually upload a descriptor.
	// We only upload a descriptor if it has expired, the intro points have changed, etc.
	p.SetPublishDescriptorCheckFrequency(30 * 1000000000)
	// FrontendDescriptorLifetime How long to keep a frontend descriptor before we expire it (in
	// nanoseconds).
	p.SetFrontendDescriptorLifetime(40 * 1000000000)
	// InstanceDescriptorTooOld If we last received a descriptor for an instance more than this many
	// nanoseconds ago, consider the instance to be down.
	p.SetInstanceDescriptorTooOld(120 * 1000000000)
	// HsdirNReplicas Number of replicas per descriptor (generally only use 2!).
	p.SetHsdirNReplicas(2)
	// NIntrosPerInstance How many intros we should use from each instance in the final frontend
	// descriptor. The upstream default is 2, but we use 1 here.
	p.SetNIntrosPerInstance(1)
	// NIntrosWanted The number of introduction points wanted for each individual descriptor.
	p.SetNIntrosWanted(20)
	// NEWNYM is a Tor control port command which clears the descriptor cache. Tor rate-limits it to
	// about one per 8 seconds. If that ever changes, adjust this variable. Otherwise, don't touch.
	p.SetNewnymSleep(8 * time.Second)

	// Below is the adaptive configuration area. Don't touch these!
	p.SetAdaptForcePublish(1)
	p.SetAdaptDistinctDescriptors(1)
}
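
// Illustrative sketch (not part of the original file): the recommended
// InitialCallbackDelay from the comment above, derived from the number of
// fronts rather than the fixed 75-second default:
//
//	func initialDelayNanos(fronts int64) int64 {
//		seconds := 10 + 2*fronts // 10s base + 2s per front
//		return seconds * 1000000000
//	}
//
//	// e.g. 10 fronts -> (10 + 20)s = 30000000000ns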

// Main This is the entry point of v3 functionality.
// Initialize onionbalance, schedule future jobs and let the scheduler do its thing.
func Main(c *cli.Context) {
	loadDefaults()
	p := Params()
	if p.NIntrosWanted() > 20 {
		logrus.Fatal("You need to reduce the NIntrosWanted param value to 20 or below. " +
			"While it's possible to push more than 20 introduction points, the Tor clients, " +
			"at this time, will reject the descriptor. See tor's HS_CONFIG_V3_MAX_INTRO_POINTS in hs_config.h and function " +
			"desc_decode_encrypted_v3 in hs_descriptor.c")
	}
	p.SetAdaptEnabled(c.Bool("adaptive"))
	p.SetAdaptStrict(c.Bool("strict"))
	p.SetTightTimings(c.Bool("tight"))
	config := c.String("config")
	ip := c.String("ip")
	port := c.Int("port")
	quick := c.Bool("quick")
	torPassword := c.String("torPassword")
	start, end, err := parseRange(c.String("dirsplit"))
	if err != nil {
		logrus.Errorf("Your dirsplit value is invalid! Error: %s", err.Error())
	}
	p.SetDirStart(start)
	p.SetDirEnd(end)
	MyOnionBalance := OnionBalance()
	if err := MyOnionBalance.InitSubsystems(InitSubsystemsParams{
		ConfigPath:  config,
		IP:          ip,
		Port:        port,
		TorPassword: torPassword,
	}); err != nil {
		panic(err)
	}
	initScheduler(quick)
}

func initScheduler(quick bool) {
	p := Params()
	instance := OnionBalance()

	// Tell Tor to be active and ready.
	go torActive(instance.Controller())

	// Check that Tor has a live consensus before doing anything.
	if !instance.Consensus().IsLive() {
		logrus.Fatal("No live consensus. Wait for Tor to grab the consensus and try again.")
	}

	if p.AdaptEnabled() {
		adaptiveStart(*instance)
	} else {
		instance.FetchInstanceDescriptors()
		// Quick is a hack to quickly deploy a new descriptor. Used to fix a stuck descriptor.
		if quick {
			time.Sleep(5 * time.Second)
		} else {
			time.Sleep(time.Duration(p.InitialCallbackDelay()))
		}
		instance.PublishAllDescriptors()
	}

	rand.Seed(time.Now().UnixNano())

	// Individual async channel threads for both fetching and publishing descriptors.
	go func() {
		for {
			select {
			case <-time.After(time.Duration(p.FetchDescriptorFrequency())):
			case <-p.FetchChannel:
				continue
			}
			run := adaptFetch()
			if run {
				// Vary timings to reduce correlation attacks.
				rand.Seed(time.Now().UnixNano())
				millisecond := time.Duration(rand.Intn(2001)) * time.Millisecond
				time.Sleep(millisecond)
				instance.FetchInstanceDescriptors()
			}
		}
	}()

	go func() {
		for {
			select {
			case <-time.After(time.Duration(p.PublishDescriptorCheckFrequency())):
			case <-p.PublishChannel:
				continue
			}

			run := adaptPublish()
			if run {
				// Vary timings to reduce correlation attacks.
				rand.Seed(time.Now().UnixNano())
				millisecond := time.Duration(rand.Intn(2001)) * time.Millisecond
				time.Sleep(millisecond)
				instance.PublishAllDescriptors()
			}
		}
	}()
}

func torActive(instance *Controller) {
	_, err := instance.Signal("ACTIVE")
	if err != nil {
		logrus.Panicf("Sending 'ACTIVE' signal failed. Check if your Tor control process is still alive and able to be connected to!")
	}
	time.Sleep(5 * time.Second)
}

func adaptiveStart(instance Onionbalance) {
	p := Params()
	logrus.Infof("[ADAPTIVE] Waiting for %d instance descriptors.", p.NInstances())
	p.SetAdaptWgEnabled(true)
	instance.FetchInstanceDescriptors()
	p.AdaptWg().Wait()
	// Check how many instances returned within the InitialCallbackDelay time. Hoping for all of them. Warn if not.
	adaptStartTime := p.AdaptStartTime()
	p.SetAdaptDelay(time.Since(time.Unix(adaptStartTime, 0)).Nanoseconds())
	logrus.Info("[ADAPTIVE] Adaptive configured! It took ", p.AdaptDelay()/1000000000, " seconds to get all descriptors. Optimizing performance!")
	if p.AdaptStrict() {
		strictTest(p.AdaptDelay())
	}
	// Prevent waitgroup recounting. Sanity check as well.
	p.SetAdaptWgEnabled(false)

	logrus.Info("[ADAPTIVE] Adapting to network and instance conditions...")

	// Make sure that NEWNYM has a chance to clear descriptors: floor the delay
	// at Tor's ~8-second NEWNYM rate limit.
	if p.AdaptDelay() < 8000000000 { // 8 seconds
		p.SetAdaptDelay(8000000000)
	}

	adaptDelay := p.AdaptDelay()
	// We got all the descriptors within this timeframe, so it should be a good default.
	p.SetFetchDescriptorFrequency(adaptDelay)
	// If new descriptors are not received for 5 fetches (2 retries), count them as old.
	p.SetInstanceDescriptorTooOld(adaptDelay * 5)
	// Expire a descriptor after two fetches. This is not ideal for large numbers of instances.
	p.SetFrontendDescriptorLifetime(adaptDelay * 2)
	// Time the publishing checks with the descriptor fetches. Only publishes if needed.
	p.SetPublishDescriptorCheckFrequency(adaptDelay / 3)

	adaptFetch()
	adaptPublish()
	// Force publishing on first start.
	p.SetAdaptIntroChanged(1)
}
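
// Illustrative sketch (not part of the original file): how the adaptive
// timings above scale from a measured adaptDelay, e.g. 30 seconds to collect
// every instance descriptor:
//
//	adaptDelay := int64(30 * 1000000000)
//	fetchFrequency := adaptDelay       // 30s between fetch rounds
//	descriptorTooOld := adaptDelay * 5 // 150s: instance considered down
//	frontendLifetime := adaptDelay * 2 // 60s: republish after two fetches
//	publishCheck := adaptDelay / 3     // 10s: publish-check cadence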

// Adaptive Publish
//
// These functions change the way onionbalance operates to prioritize introduction rotation
// onto the network with the most ideal timings (to increase reachability). They respond to the number of
// active instances and change the publishing timings to the network in hopes of not overloading
// the attached Tor process. The point is to help tune the default parameters, based on the number
// of instances, so that onion service uptime is maximized. This is heavily opinionated and is not
// a perfect alternative to manual refinement.
// However, it is far better than what the original Python implementation does, i.e. nothing.

func adaptPublish() bool {
	p := Params()
	if !p.AdaptEnabled() {
		return true
	}

	// With fewer than 20 introduction points active, keep distinct descriptors on and shuffle the
	// intro point selection; the benefits of distinct descriptors only show with more than 20
	// instances. We increase reachability with tighter timings on descriptor pushes and better
	// introduction point selection.
	if p.NIntroduction() < 20 {
		p.SetAdaptDistinctDescriptors(1)
		p.SetAdaptShuffle(1)
		// If no introduction point changed and all instances are active, do not proceed.
		if p.AdaptIntroChanged() == 0 && p.AdaptUp() == p.AdaptCount() {
			return false
		}
	} else {
		p.SetAdaptDistinctDescriptors(1)
		// If the number of introduction points is less than the amount it takes to fill all HSDir
		// descriptors, configure the push of introduction points to prioritize the freshest
		// descriptors received. Otherwise, treat all introduction points as equal priority.
		maxintropoints := p.AdaptHSDirCount() * 20
		if maxintropoints > p.NIntroduction() {
			p.SetAdaptShuffle(1)
		} else {
			p.SetAdaptShuffle(0)
		}
	}

	// ADAPT TIMING ADJUSTMENTS REMOVED (correlation attack potential)

	return true
}

// Adaptive Fetching
func adaptFetch() bool {
	p := Params()
	if !p.AdaptEnabled() {
		return true
	}
	// Warn if some instances are down.
	if p.AdaptCount() != p.NInstances() {
		if p.AdaptDownNoDescriptor() != 0 {
			logrus.Infof("[ADAPTIVE] There are %d instances that have returned no descriptors. If you see this message a lot, "+
				"stop gobalance and remove the offline instances for better performance.", p.AdaptDownNoDescriptor())
		}

		if p.AdaptDownInstanceOld() != 0 {
			logrus.Infof("[ADAPTIVE] There are %d instances with old descriptors. If you see this message a lot, "+
				"stop gobalance, reset Tor, and remove the offline instances for better performance.", p.AdaptDownInstanceOld())
		}
	}

	// ADAPT TIMING ADJUSTMENTS REMOVED (correlation attack potential)

	return true
}

// strictTest checks whether the adaptive timings are reasonable given the number of instances. Runs only on start.
// Gracefully exits on failure. Configurable with the "strict" cli option. Defaults to true.
func strictTest(timings int64) {
	p := Params()
	// If there are failed services within the config.yaml which returned no descriptors, exit with a warning.
	// Best to clear out the downed instances or wait for their recovery before doing anything else.
	nInstances := p.NInstances()
	if nInstances < p.AdaptCount() {
		logrus.Infof("[Strict] Some instances are down at the start of this process. Wait for their recovery or remove " +
			"the downed instances.")
		if logrus.GetLevel().String() != "debug" {
			logrus.Infof("[Strict] Set '--verbosity debug' to see downed instances.")
		}
		os.Exit(0)
	}

	// Tor has a soft max of 32 general-purpose client circuits that can be pending.
	// If you go over that value it will wait until some finish. This means having more than 32 fronts will greatly limit
	// your circuit builds.
	if nInstances > 32 {
		logrus.Infof("[Strict] You have over 32 active fronts. Tor has a soft limit of 32 general-purpose pending circuits. " +
			"For the best performance split your fronts and descriptor push over multiple gobalance instances and Tor processes.")
		logrus.Debugf("You have %d active fronts. You want under 32.", nInstances)
		os.Exit(0)
	}

	// From many tests, the tolerance for a series of instances on a single Tor process is
	// a simple base of 10 seconds plus 5 seconds per instance. This takes into account the delay of Tor network circuit building.
	// The timings here were calculated while the Tor network was under DDOS with extreme latency build issues.
	// They are probably inaccurate in times of peace and should be tightened further.
	maxTimings := (10 + (5 * nInstances)) * 1000000000
	if maxTimings < timings {
		logrus.Infof("[Strict] The Tor process is too slow to handle %d instances in current network conditions. "+
			"Reduce the amount of instances on an individual onionbalance and tor process to pass strict test checks, or "+
			"disable with cli --strict false.", nInstances)
		logrus.Debugf("strictTimings=%d and reported timings=%d in seconds", maxTimings/1000000000, timings/1000000000)
		os.Exit(0)
	}
}
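
// Illustrative sketch (not part of the original file): the strict-mode budget
// above for, say, 8 instances:
//
//	nInstances := int64(8)
//	maxTimings := (10 + (5 * nInstances)) * 1000000000 // 50s budget
//	// bootstrapping slower than 50 seconds fails the strict test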

func parseRange(input string) (int, int, error) {
	if input == "" {
		return 0, 0, nil
	}
	pattern := `^([1-8])(?:-([1-8]))?$`
	re := regexp.MustCompile(pattern)
	matches := re.FindStringSubmatch(input)

	if matches == nil {
		return 0, 0, errors.New("the dirsplit value is invalid: it needs to be within the range of 1-8")
	}

	start, err := strconv.Atoi(matches[1])
	if err != nil {
		return 0, 0, err
	}

	end := start
	if matches[2] != "" {
		end, err = strconv.Atoi(matches[2])
		if err != nil {
			return 0, 0, err
		}
	}

	// Make sure the end is greater than or equal to the start.
	if end < start {
		return 0, 0, errors.New("the end number must be greater than or equal to the start number: it's an ascending range")
	}

	return start, end, nil
}
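
// Illustrative usage (not part of the original file):
//
//	start, end, _ := parseRange("2-5") // -> 2, 5
//	start, end, _ = parseRange("3")    // -> 3, 3
//	start, end, _ = parseRange("")     // -> 0, 0 (dirsplit disabled)
//	_, _, err := parseRange("9")       // -> error: outside the 1-8 range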
@ -0,0 +1,205 @@
package onionbalance

import (
	"github.com/sirupsen/logrus"
	"gobalance/pkg/btime"
	"gobalance/pkg/clockwork"
	"gopkg.in/yaml.v3"
	"math/rand"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"time"
)

var once sync.Once
var inst *Onionbalance

func OnionBalance() *Onionbalance {
	once.Do(func() {
		inst = &Onionbalance{
			IsTestnet: false,
		}
	})
	return inst
}

type Onionbalance struct {
	IsTestnet   bool
	configPath  string
	configData  ConfigData
	controller  *Controller
	consensus   *Consensus
	services    []*Service
	servicesMtx sync.RWMutex
}

func (b *Onionbalance) GetServices() []*Service {
	b.servicesMtx.RLock()
	defer b.servicesMtx.RUnlock()
	return b.services
}

func (b *Onionbalance) SetServices(newVal []*Service) {
	b.servicesMtx.Lock()
	defer b.servicesMtx.Unlock()
	b.services = newVal
}

func (b *Onionbalance) Consensus() *Consensus {
	return b.consensus
}

func (b *Onionbalance) Controller() *Controller {
	return b.controller
}

type InitSubsystemsParams struct {
	ConfigPath  string
	IP          string
	Port        int
	Socket      string
	TorPassword string
}

func (b *Onionbalance) InitSubsystems(args InitSubsystemsParams) error {
	btime.Clock = clockwork.NewRealClock()
	rand.Seed(time.Now().UnixNano())

	b.configPath, _ = filepath.Abs(args.ConfigPath)
	b.configData = b.LoadConfigFile()
	b.IsTestnet = false
	if b.IsTestnet {
		logrus.Warn("OnionBalance configured on a testnet!")
	}
	b.controller = NewController(args.IP, args.Port, args.TorPassword)
	b.consensus = NewConsensus(b.controller, true)

	// Initialize our services.
	b.SetServices(b.initializeServicesFromConfigData())

	// Catch interesting events (like receiving descriptors etc.)
	if err := b.controller.SetEvents(); err != nil {
		return err
	}

	logrus.Warnf("OnionBalance initialized (tor version: %s)!", b.controller.GetVersion())
	logrus.Warn(strings.Repeat("=", 80))
	return nil
}

func (b *Onionbalance) initializeServicesFromConfigData() []*Service {
	services := make([]*Service, 0)

	p := Params()
	p.SetAdaptWgEnabled(true)
	for _, svc := range b.configData.Services {
		services = append(services, NewService(b.consensus, b.controller, svc, b.configPath))
	}
	p.SetAdaptWgEnabled(false)
	return services
}

func (b *Onionbalance) LoadConfigFile() (out ConfigData) {
	logrus.Infof("Loading the config file '%s'.", b.configPath)
	by, err := os.ReadFile(b.configPath)
	if err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal(by, &out); err != nil {
		panic(err)
	}
	logrus.Debugf("OnionBalance config data: %v", out)
	return
}

// PublishAllDescriptors attempts to publish all descriptors for each service.
func (b *Onionbalance) PublishAllDescriptors() {
	logrus.Info("[*] PublishAllDescriptors() called [*]")

	if !b.consensus.IsLive() {
		logrus.Info("No live consensus. Wait for Tor to grab the consensus and try again.")
		return
	}

	for _, svc := range b.GetServices() {
		svc.PublishDescriptors()
	}
}

func (b *Onionbalance) FetchInstanceDescriptors() {
	p := Params()
	logrus.Info("[*] FetchInstanceDescriptors() called [*]")
	p.SetNIntroduction(0)
	p.SetNDescriptors(0)
	p.SetAdaptHSDirFailureCount(0)
	p.SetAdaptIntroChanged(0)

	if !b.consensus.IsLive() {
		logrus.Warn("No live consensus. Wait for Tor to grab the consensus and try again.")
		return
	}

	allInstances := b.getAllInstances()

	helperFetchAllInstanceDescriptors(b.controller, allInstances)
}

// Get all instances for all services.
func (b *Onionbalance) getAllInstances() []*Instance {
	instances := make([]*Instance, 0)
	b.servicesMtx.Lock()
	for _, srv := range b.services {
		instances = append(instances, srv.GetInstances()...)
	}
	b.servicesMtx.Unlock()
	return instances
}

// Try to fetch fresh descriptors for all HS instances.
func helperFetchAllInstanceDescriptors(ctrl *Controller, instances []*Instance) {
	logrus.Info("Initiating fetch of descriptors for all service instances.")
	p := Params()

	for {
		// Clear Tor's descriptor cache before fetching by sending the NEWNYM signal.
		if _, err := ctrl.Signal("NEWNYM"); err != nil {
			if err == ErrSocketClosed {
				logrus.Error("Failed to send NEWNYM signal, socket is closed.")
				ctrl.ReAuthenticate()
				continue
			} else {
				break
			}
		}
		// TODO: Find a way to check if NEWNYM did in fact clear the descriptors.
		// Checked to see if there was a way. There isn't. Made it configurable.
		time.Sleep(p.NewnymSleep())
		break
	}

	uniqueInstances := make(map[string]*Instance)
	for _, inst := range instances {
		uniqueInstances[inst.OnionAddress] = inst
	}

	if p.AdaptWgEnabled() {
		p.SetAdaptStartTime(time.Now().Unix())
	}

	for _, inst := range uniqueInstances {
		for {
			if err := inst.FetchDescriptor(); err != nil {
				if err == ErrSocketClosed {
					logrus.Error("Failed to fetch descriptor, socket is closed")
					ctrl.ReAuthenticate()
					continue
				} else {
					break
				}
			}
			break
		}
	}
}
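
// Illustrative sketch (not part of the original file): the fetch flow above
// in order -- send NEWNYM to drop Tor's cached descriptors, wait out the
// rate limit, then request each unique instance once, re-authenticating the
// control connection if its socket closed under us:
//
//	ctrl.Signal("NEWNYM")
//	time.Sleep(Params().NewnymSleep())
//	for _, inst := range uniqueInstances {
//		if err := inst.FetchDescriptor(); err == ErrSocketClosed {
//			ctrl.ReAuthenticate() // then retry the same instance
//		}
//	}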
529
endgamefiles/sourcecode/gobalance/pkg/onionbalance/param.go
Normal file
@ -0,0 +1,529 @@
package onionbalance

import (
	"sync"
	"time"
)

var (
	params     *Param
	paramsOnce sync.Once
)

func Params() *Param {
	paramsOnce.Do(func() {
		params = new(Param)
		params.adaptWg = &sync.WaitGroup{}
		params.FetchChannel = make(chan bool)
		params.PublishChannel = make(chan bool)
	})
	return params
}

type Param struct {
	sync.Mutex

	initialCallbackDelay            int64
	fetchDescriptorFrequency        int64
	publishDescriptorCheckFrequency int64
	frontendDescriptorLifetime      int64
	instanceDescriptorTooOld        int64
	hsdirNReplicas                  int

	hsdirSpreadStore   int
	nIntrosPerInstance int
	nIntrosWanted      int
	tightTimings       bool
	newnymSleep        time.Duration

	// NInstances configures on boot. Don't change the default value.
	nInstances    int64
	nDescriptors  int64
	nIntroduction int64

	adaptEnabled   bool
	adaptStrict    bool
	adaptWg        *sync.WaitGroup
	adaptWgCount   int64
	adaptWgEnabled bool
	FetchChannel   chan bool
	PublishChannel chan bool

	adaptDistinctDescriptors int64
	adaptStartTime           int64
	adaptDelay               int64
	adaptHSDirCount          int64
	adaptHSDirFailureCount   int64

	adaptCount            int64
	adaptUp               int64
	adaptDown             int64
	adaptDownNoDescriptor int64
	adaptDownInstanceOld  int64
	adaptIntroChanged     int64
	adaptDescriptorFail   int64
	adaptFetchFail        int64

	adaptForcePublish int64

	adaptShuffle int64

	dirStart int
	dirEnd   int
}

func (p *Param) InitialCallbackDelay() int64 {
	p.Lock()
	defer p.Unlock()
	return p.initialCallbackDelay
}

func (p *Param) SetInitialCallbackDelay(initialCallbackDelay int64) {
	p.Lock()
	defer p.Unlock()
	p.initialCallbackDelay = initialCallbackDelay
}

func (p *Param) FetchDescriptorFrequency() int64 {
	p.Lock()
	defer p.Unlock()
	return p.fetchDescriptorFrequency
}

func (p *Param) SetFetchDescriptorFrequency(fetchDescriptorFrequency int64) {
	p.Lock()
	defer p.Unlock()
	p.fetchDescriptorFrequency = fetchDescriptorFrequency
}

func (p *Param) PublishDescriptorCheckFrequency() int64 {
	p.Lock()
	defer p.Unlock()
	return p.publishDescriptorCheckFrequency
}

func (p *Param) SetPublishDescriptorCheckFrequency(publishDescriptorCheckFrequency int64) {
	p.Lock()
	defer p.Unlock()
	p.publishDescriptorCheckFrequency = publishDescriptorCheckFrequency
}

func (p *Param) FrontendDescriptorLifetime() int64 {
	p.Lock()
	defer p.Unlock()
	return p.frontendDescriptorLifetime
}

func (p *Param) SetFrontendDescriptorLifetime(frontendDescriptorLifetime int64) {
	p.Lock()
	defer p.Unlock()
	p.frontendDescriptorLifetime = frontendDescriptorLifetime
}

func (p *Param) InstanceDescriptorTooOld() int64 {
	p.Lock()
	defer p.Unlock()
	return p.instanceDescriptorTooOld
}

func (p *Param) SetInstanceDescriptorTooOld(instanceDescriptorTooOld int64) {
	p.Lock()
	defer p.Unlock()
	p.instanceDescriptorTooOld = instanceDescriptorTooOld
}

func (p *Param) HsdirNReplicas() int {
	p.Lock()
	defer p.Unlock()
	return p.hsdirNReplicas
}

func (p *Param) SetHsdirNReplicas(hsdirNReplicas int) {
	p.Lock()
	defer p.Unlock()
	p.hsdirNReplicas = hsdirNReplicas
}

func (p *Param) HsdirSpreadStore() int {
	p.Lock()
	defer p.Unlock()
	return p.hsdirSpreadStore
}

func (p *Param) SetHsdirSpreadStore(hsdirSpreadStore int) {
	p.Lock()
	defer p.Unlock()
	p.hsdirSpreadStore = hsdirSpreadStore
}

func (p *Param) NIntrosPerInstance() int {
	p.Lock()
	defer p.Unlock()
	return p.nIntrosPerInstance
}

func (p *Param) SetNIntrosPerInstance(nIntrosPerInstance int) {
	p.Lock()
	defer p.Unlock()
	p.nIntrosPerInstance = nIntrosPerInstance
}

func (p *Param) NIntrosWanted() int {
	p.Lock()
	defer p.Unlock()
	return p.nIntrosWanted
}

func (p *Param) SetNIntrosWanted(nIntrosWanted int) {
	p.Lock()
	defer p.Unlock()
	p.nIntrosWanted = nIntrosWanted
}

func (p *Param) TightTimings() bool {
	p.Lock()
	defer p.Unlock()
	return p.tightTimings
}

func (p *Param) SetTightTimings(tightTimings bool) {
	p.Lock()
	defer p.Unlock()
	p.tightTimings = tightTimings
}

func (p *Param) NewnymSleep() time.Duration {
	p.Lock()
	defer p.Unlock()
	return p.newnymSleep
}

func (p *Param) SetNewnymSleep(newnymSleep time.Duration) {
	p.Lock()
	defer p.Unlock()
	p.newnymSleep = newnymSleep
}

func (p *Param) NInstances() int64 {
	p.Lock()
	defer p.Unlock()
	return p.nInstances
}

func (p *Param) SetNInstances(nInstances int64) {
	p.Lock()
	defer p.Unlock()
	p.nInstances = nInstances
}

func (p *Param) NDescriptors() int64 {
	p.Lock()
	defer p.Unlock()
	return p.nDescriptors
}

func (p *Param) SetNDescriptors(nDescriptors int64) {
	p.Lock()
	defer p.Unlock()
	p.nDescriptors = nDescriptors
}

func (p *Param) NIntroduction() int64 {
	p.Lock()
	defer p.Unlock()
	return p.nIntroduction
}

func (p *Param) SetNIntroduction(nIntroduction int64) {
	p.Lock()
	defer p.Unlock()
	p.nIntroduction = nIntroduction
}

func (p *Param) AdaptEnabled() bool {
	p.Lock()
	defer p.Unlock()
	return p.adaptEnabled
}

func (p *Param) SetAdaptEnabled(adaptEnabled bool) {
	p.Lock()
	defer p.Unlock()
	p.adaptEnabled = adaptEnabled
}

func (p *Param) AdaptStrict() bool {
	p.Lock()
	defer p.Unlock()
	return p.adaptStrict
}

func (p *Param) SetAdaptStrict(adaptStrict bool) {
	p.Lock()
	defer p.Unlock()
	p.adaptStrict = adaptStrict
}

func (p *Param) AdaptWg() *sync.WaitGroup {
	p.Lock()
	defer p.Unlock()
	return p.adaptWg
}

func (p *Param) SetAdaptWg(adaptWg *sync.WaitGroup) {
	p.Lock()
	defer p.Unlock()
	p.adaptWg = adaptWg
}

func (p *Param) AdaptWgCount() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptWgCount
}

func (p *Param) SetAdaptWgCount(adaptWgCount int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptWgCount = adaptWgCount
}

func (p *Param) AdaptWgEnabled() bool {
	p.Lock()
	defer p.Unlock()
	return p.adaptWgEnabled
}

func (p *Param) SetAdaptWgEnabled(adaptWgEnabled bool) {
	p.Lock()
	defer p.Unlock()
	p.adaptWgEnabled = adaptWgEnabled
}

func (p *Param) AdaptDistinctDescriptors() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptDistinctDescriptors
}

func (p *Param) SetAdaptDistinctDescriptors(adaptDistinctDescriptors int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptDistinctDescriptors = adaptDistinctDescriptors
}

func (p *Param) AdaptStartTime() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptStartTime
}

func (p *Param) SetAdaptStartTime(adaptStartTime int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptStartTime = adaptStartTime
}

func (p *Param) AdaptDelay() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptDelay
}

func (p *Param) SetAdaptDelay(adaptDelay int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptDelay = adaptDelay
}

func (p *Param) AdaptHSDirCount() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptHSDirCount
}

func (p *Param) SetAdaptHSDirCount(adaptHSDirCount int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptHSDirCount = adaptHSDirCount
}

func (p *Param) AdaptHSDirFailureCount() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptHSDirFailureCount
}

func (p *Param) SetAdaptHSDirFailureCount(adaptHSDirFailureCount int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptHSDirFailureCount = adaptHSDirFailureCount
}

func (p *Param) AdaptCount() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptCount
}

func (p *Param) SetAdaptCount(adaptCount int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptCount = adaptCount
}

func (p *Param) AdaptUp() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptUp
}

func (p *Param) SetAdaptUp(adaptUp int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptUp = adaptUp
}

func (p *Param) AdaptDown() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptDown
}

func (p *Param) SetAdaptDown(adaptDown int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptDown = adaptDown
}

func (p *Param) AdaptDownNoDescriptor() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptDownNoDescriptor
}

func (p *Param) SetAdaptDownNoDescriptor(adaptDownNoDescriptor int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptDownNoDescriptor = adaptDownNoDescriptor
}

func (p *Param) AdaptDownInstanceOld() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptDownInstanceOld
}

func (p *Param) SetAdaptDownInstanceOld(adaptDownInstanceOld int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptDownInstanceOld = adaptDownInstanceOld
}

func (p *Param) AdaptIntroChanged() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptIntroChanged
}

func (p *Param) SetAdaptIntroChanged(adaptIntroChanged int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptIntroChanged = adaptIntroChanged
}

func (p *Param) AdaptDescriptorFail() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptDescriptorFail
}

func (p *Param) SetAdaptDescriptorFail(adaptDescriptorFail int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptDescriptorFail = adaptDescriptorFail
}

func (p *Param) AdaptFetchFail() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptFetchFail
}

func (p *Param) SetAdaptFetchFail(adaptFetchFail int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptFetchFail = adaptFetchFail
}

func (p *Param) AdaptForcePublish() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptForcePublish
}

func (p *Param) SetAdaptForcePublish(adaptForcePublish int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptForcePublish = adaptForcePublish
}

func (p *Param) AdaptShuffle() int64 {
	p.Lock()
	defer p.Unlock()
	return p.adaptShuffle
}

func (p *Param) SetAdaptShuffle(adaptShuffle int64) {
	p.Lock()
	defer p.Unlock()
	p.adaptShuffle = adaptShuffle
}

func (p *Param) DirStart() int {
	p.Lock()
	defer p.Unlock()
	return p.dirStart
}

func (p *Param) SetDirStart(dirStart int) {
	p.Lock()
	defer p.Unlock()
	p.dirStart = dirStart
}

func (p *Param) DirEnd() int {
	p.Lock()
	defer p.Unlock()
	return p.dirEnd
}

func (p *Param) SetDirEnd(dirEnd int) {
	p.Lock()
	defer p.Unlock()
	p.dirEnd = dirEnd
}

//const (
//	// FrontendDescriptorLifetime How long should we keep a frontend descriptor before we expire it (in
//	// seconds)?
//	FrontendDescriptorLifetime = 60 * 60
//	FrontendDescriptorLifetimeTestnet = 20
//
//	// HsdirNReplicas Number of replicas per descriptor
//	HsdirNReplicas = 2
//
//	// HsdirSpreadStore How many uploads per replica
//	// [TODO: Get these from the consensus instead of hardcoded]
//	HsdirSpreadStore = 4
//
//	// InstanceDescriptorTooOld If we last received a descriptor for this instance more than
//	// INSTANCE_DESCRIPTOR_TOO_OLD seconds ago, consider the instance to be down.
//	InstanceDescriptorTooOld = 60 * 60
//
//	// NIntrosPerInstance How many intros should we use from each instance in the final frontend
//	// descriptor?
//	// [TODO: This makes no attempt to hide the use of onionbalance. In the future we
//	// should be smarter and sneakier here.]
//	NIntrosPerInstance = 2
//)
643
endgamefiles/sourcecode/gobalance/pkg/onionbalance/service.go
Normal file
@ -0,0 +1,643 @@
package onionbalance

import (
	"crypto/ed25519"
	"encoding/pem"
	"errors"
	"fmt"
	"github.com/sirupsen/logrus"
	"gobalance/pkg/btime"
	"gobalance/pkg/gobpk"
	"gobalance/pkg/onionbalance/hs_v3/ext"
	"gobalance/pkg/stem/descriptor"
	"gobalance/pkg/stem/util"
	"math/rand"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"sync"
	"time"
)

type Service struct {
	controller       *Controller
	identityPrivKey  gobpk.PrivateKey
	OnionAddress     string
	Instances        []*Instance
	instancesMtx     sync.RWMutex
	firstDescriptor  *OBDescriptor
	secondDescriptor *OBDescriptor
	consensus        *Consensus
}

// NewService creates the service and its instances with 'serviceConfigData' straight out of the config file.
// 'configPath' is the full path to the config file.
// Panics if the config file is not well formatted.
func NewService(consensus *Consensus, controller *Controller, serviceConfigData ServiceConfig, configPath string) *Service {
	s := &Service{}
	s.controller = controller
	s.consensus = consensus

	// Load the private key and onion address from the config
	// (the onion address also includes the ".onion").
	s.identityPrivKey, s.OnionAddress = s.loadServiceKeys(serviceConfigData, configPath)

	// Now load up the instances.
	s.SetInstances(s.loadInstances(serviceConfigData))

	// First descriptor for this service (the one we uploaded most recently).
	s.firstDescriptor = nil
	// Second descriptor for this service (the one we uploaded most recently).
	s.secondDescriptor = nil

	return s
}

func (s *Service) GetInstances() []*Instance {
	s.instancesMtx.RLock()
	defer s.instancesMtx.RUnlock()
	return s.Instances
}

func (s *Service) SetInstances(newInstances []*Instance) {
	s.instancesMtx.Lock()
	defer s.instancesMtx.Unlock()
	s.Instances = newInstances
}

func (s *Service) loadServiceKeys(serviceConfigData ServiceConfig, configPath string) (gobpk.PrivateKey, string) {
	// First of all, load up the private key.
	keyFname := serviceConfigData.Key
	configDirectory := filepath.Dir(configPath)
	if !filepath.IsAbs(keyFname) {
		keyFname = filepath.Join(configDirectory, keyFname)
	}
	pemKeyBytes, err := os.ReadFile(keyFname)
	if err != nil {
		logrus.Fatalf("Unable to read service private key file ('%v')", err)
	}
	var identityPrivKey ed25519.PrivateKey
	blocks, rest := pem.Decode(pemKeyBytes)
	if len(rest) == 0 {
		identityPrivKey = ed25519.NewKeyFromSeed(blocks.Bytes[16 : 16+32])
	}
	isPrivKeyInTorFormat := false
	var privKey gobpk.PrivateKey
	if identityPrivKey == nil {
		identityPrivKey = LoadTorKeyFromDisk(pemKeyBytes)
		isPrivKeyInTorFormat = true
		privKey = gobpk.New(identityPrivKey, true)
	} else {
		privKey = gobpk.New(identityPrivKey, false)
	}

	// Get the onion address.
	identityPubKey := identityPrivKey.Public().(ed25519.PublicKey)

	onionAddress := descriptor.AddressFromIdentityKey(identityPubKey)
	if isPrivKeyInTorFormat {
		pub := ext.PublickeyFromESK(identityPrivKey)
		onionAddress = descriptor.AddressFromIdentityKey(pub)
	}

	logrus.Warnf("Loaded onion %s from %s", onionAddress, keyFname)

	return privKey, onionAddress
}

func (s *Service) loadInstances(serviceConfigData ServiceConfig) []*Instance {
	p := Params()
	instances := make([]*Instance, 0)
	for _, configInstance := range serviceConfigData.Instances {
		newInstance := NewInstance(s.controller, configInstance.Address)
		instances = append(instances, newInstance)
	}

	if p.AdaptWgEnabled() {
		p.AdaptWg().Add(len(instances))
		adaptWgCount := p.AdaptWgCount() + int64(len(instances))
		p.SetAdaptWgCount(adaptWgCount)
		logrus.Debugf("Adding more waitgroups... current count: %d", adaptWgCount)
	}

	// Some basic validation.
	for _, inst := range instances {
		if s.hasOnionAddress(inst.OnionAddress) {
			logrus.Errorf("Config file error. Did you configure your frontend (%s) as an instance?", s.OnionAddress)
			panic("BadServiceInit")
		}
	}
	return instances
}

// Return true if this service has this onion address.
func (s *Service) hasOnionAddress(onionAddress string) bool {
	// Strip the ".onion" part of the address if it exists since some
	// subsystems don't use it (e.g. Tor sometimes omits it from control
	// port responses).
	myOnionAddress := strings.Replace(s.OnionAddress, ".onion", "", 1)
	theirOnionAddress := strings.Replace(onionAddress, ".onion", "", 1)
	return myOnionAddress == theirOnionAddress
}

func (s *Service) PublishDescriptors() {
	s.publishDescriptor(true)
	s.publishDescriptor(false)
}

// getRollingSubArr returns a window of 'count' elements starting at offset
// idx*count, wrapping around the end of 'arr'.
func getRollingSubArr[T any](arr []T, idx, count int) (out []T) {
	begin := (idx * count) % len(arr)
	for i := 0; i < count; i++ {
		out = append(out, arr[begin])
		begin = (begin + 1) % len(arr)
	}
	return
}
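
// Illustrative usage (not part of the original file): a rolling window over
// the intro point pool, so consecutive HSDirs get shifted, wrapping slices
// and the whole pool gets spread across the ring:
//
//	arr := []int{0, 1, 2, 3, 4}
//	getRollingSubArr(arr, 0, 3) // -> [0 1 2]
//	getRollingSubArr(arr, 1, 3) // -> [3 4 0]
//	getRollingSubArr(arr, 2, 3) // -> [1 2 3]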

// Attempt to publish a descriptor if needed.
// If 'isFirstDesc' is set then attempt to upload the first descriptor
// of the service, otherwise the second.
func (s *Service) publishDescriptor(isFirstDesc bool) {
	p := Params()
	if p.AdaptDistinctDescriptors() == 1 {
		_, timePeriodNumber := GetSrvAndTimePeriod(isFirstDesc, *s.consensus.Consensus())
		blindingParam := s.consensus.consensus.GetBlindingParam(s.getIdentityPubkeyBytes(), timePeriodNumber)
		desc, err := NewOBDescriptor(s.OnionAddress, s.identityPrivKey, blindingParam, nil, isFirstDesc, s.consensus.Consensus())
		if err != nil {
			if err == ErrBadDescriptor {
				return
			}
			panic(err)
		}
		blindedKey := desc.getBlindedKey()
		responsibleHsdirs, err := GetResponsibleHsdirs(blindedKey, isFirstDesc, s.consensus)
		if err != nil {
			if err == ErrEmptyHashRing {
				logrus.Warning("Can't publish desc with no hash ring. Delaying...")
				return
			}
			panic(err)
		}

		introPointsForDistinctDesc, err := s.getIntrosForDistinctDesc()
		if err != nil {
			if err == ErrNotEnoughIntros {
				return
			}
			panic(err)
		}

		// Iterate all HSDirs, and create a distinct descriptor with a distinct set of intro points for each of them.
		for idx, hsdir := range responsibleHsdirs {
			introPoints := getRollingSubArr(introPointsForDistinctDesc, idx, p.NIntrosWanted())
			desc, err := NewOBDescriptor(s.OnionAddress, s.identityPrivKey, blindingParam, introPoints, isFirstDesc, s.consensus.Consensus())
			if err != nil {
				if err == ErrBadDescriptor {
					return
				}
				panic(err)
			}
			s.uploadDescriptor(s.controller, desc, []string{hsdir})
		}
		return
	}

	if !s.shouldPublishDescriptorNow(isFirstDesc) {
		logrus.Infof("No reason to publish %t descriptor for %s", isFirstDesc, s.OnionAddress)
		return
	}

	introPoints, err := s.getIntrosForDesc()
	if err != nil {
		if err == ErrNotEnoughIntros {
			return
		}
		panic(err)
	}

	// Derive the blinding parameter.
	_, timePeriodNumber := GetSrvAndTimePeriod(isFirstDesc, *s.consensus.Consensus())
	blindingParam := s.consensus.consensus.GetBlindingParam(s.getIdentityPubkeyBytes(), timePeriodNumber)

	desc, err := NewOBDescriptor(s.OnionAddress, s.identityPrivKey, blindingParam, introPoints, isFirstDesc, s.consensus.Consensus())
	if err != nil {
		if err == ErrBadDescriptor {
			return
		}
		panic(err)
	}

	logrus.Infof("Service %s created %t descriptor (%d intro points) (blinding param: %x) (size: %d bytes). About to publish:",
		s.OnionAddress, isFirstDesc, desc.introSet.Len(), blindingParam, len(desc.v3Desc.String()))

	// When we do a v3 HSPOST on the control port, Tor decodes the
	// descriptor and extracts the blinded pubkey to be used when uploading
	// the descriptor. So let's do the same to compute the responsible
	// HSDirs:
	blindedKey := desc.getBlindedKey()

	// Calculate responsible HSDirs for our service.
	responsibleHsdirs, err := GetResponsibleHsdirs(blindedKey, isFirstDesc, s.consensus)
	if err != nil {
		if err == ErrEmptyHashRing {
			logrus.Warning("Can't publish desc with no hash ring. Delaying...")
			return
		}
		panic(err)
	}

	desc.setLastPublishAttemptTs(btime.Clock.Now().UTC())

	logrus.Infof("Uploading descriptor for %s to %s", s.OnionAddress, responsibleHsdirs)

	// Upload the descriptor.
	s.uploadDescriptor(s.controller, desc, responsibleHsdirs)

	// It would be better to set last_upload_ts when an upload succeeds and
	// not when an upload is just attempted. Unfortunately the HS_DESC
	// UPLOADED event does not provide information about the service and
	// so it can't be used to determine when a descriptor upload succeeds.
	desc.setLastUploadTs(btime.Clock.Now().UTC())
	desc.setResponsibleHsdirs(responsibleHsdirs)

	// Set the descriptor.
	if isFirstDesc {
		s.firstDescriptor = desc
	} else {
		s.secondDescriptor = desc
	}
}

// uploadDescriptor is a convenience method to upload a descriptor.
// It handles some error checking and logging inside the Service type.
func (s *Service) uploadDescriptor(controller *Controller, obDesc *OBDescriptor, hsdirs []string) {
	for {
		err := commonUploadDescriptor(controller, obDesc.v3Desc, hsdirs, obDesc.onionAddress)
		if err != nil {
			if err == ErrSocketClosed {
				logrus.Errorf("Error uploading descriptor for service %s.onion. Control port socket is closed.", obDesc.onionAddress)
				controller.ReAuthenticate()
				continue
			} else {
				logrus.Errorf("Error uploading descriptor for service %s.onion: %v", obDesc.onionAddress, err)
				break
			}
		}
		break
	}
}

func commonUploadDescriptor(controller *Controller, signedDescriptor *descriptor.HiddenServiceDescriptorV3, hsdirs []string, v3OnionAddress string) error {
	logrus.Debug("Beginning service descriptor upload.")
	serverArgs := ""
	// Provide server fingerprints to the control command if HSDirs are specified.
	if hsdirs != nil {
		strs := make([]string, 0)
		for _, hsDir := range hsdirs {
			strs = append(strs, "SERVER="+hsDir)
		}
		serverArgs += strings.Join(strs, " ")
	}
	if v3OnionAddress != "" {
		serverArgs += " HSADDRESS=" + strings.Replace(v3OnionAddress, ".onion", "", 1)
	}
	msg := fmt.Sprintf("+HSPOST %s\n%s\r\n.\r\n", serverArgs, signedDescriptor)
	res, err := controller.Msg(msg)
	if err != nil {
		return err
	}
	if res != "250 OK" {
		return fmt.Errorf("HSPOST returned unexpected response code: %s", res)
	}
	return nil
}
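
// Illustrative sketch (not part of the original file): the control-port
// message built above, for two HSDirs and a frontend address, has this shape:
//
//	+HSPOST SERVER=<fingerprint1> SERVER=<fingerprint2> HSADDRESS=<56-char address>
//	<signed descriptor body>
//	.
//
// Tor answers "250 OK" on acceptance; any other reply is surfaced as an error.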

// unique returns a slice of intro points where duplicates have been removed,
// keeping the original order.
func unique(arr []descriptor.IntroductionPointV3) []descriptor.IntroductionPointV3 {
	out := make([]descriptor.IntroductionPointV3, 0, len(arr))
	cache := make(map[string]struct{})
	for _, el := range arr {
		if _, ok := cache[el.OnionKey]; !ok {
			out = append(out, el)
			cache[el.OnionKey] = struct{}{}
		}
	}
	return out
}
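
// Illustrative usage (not part of the original file): deduplication is keyed
// on the intro point's OnionKey and keeps the first occurrence:
//
//	a := descriptor.IntroductionPointV3{OnionKey: "k1"}
//	b := descriptor.IntroductionPointV3{OnionKey: "k2"}
//	uniq := unique([]descriptor.IntroductionPointV3{a, b, a}) // -> [a, b]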
|
||||
|
||||
var ErrEmptyHashRing = errors.New("EmptyHashRing")
|
||||
var ErrBadDescriptor = errors.New("BadDescriptor")
|
||||
var ErrNotEnoughIntros = errors.New("NotEnoughIntros")
|
||||
|
||||
// Get all unique intros in a flat array
|
||||
func (s *Service) getIntrosForDistinctDesc() ([]descriptor.IntroductionPointV3, error) {
|
||||
allIntros := s.getAllIntrosForPublish()
|
||||
allIntrosFlat := allIntros.getIntroPointsFlat()
|
||||
uniqueIntros := unique(allIntrosFlat)
|
||||
finalIntros := uniqueIntros
|
||||
if len(finalIntros) == 0 {
|
||||
logrus.Info("Got no usable intro points from our instances. Delaying descriptor push...")
|
||||
return nil, ErrNotEnoughIntros
|
||||
}
|
||||
return finalIntros, nil
|
||||
}
|
||||
|
||||
// Get the intros that should be included in a descriptor for this service.
|
||||
func (s *Service) getIntrosForDesc() ([]descriptor.IntroductionPointV3, error) {
|
||||
p := Params()
|
||||
allIntros := s.getAllIntrosForPublish()
|
||||
|
||||
// Get number of instances that contributed to final intro point list
|
||||
nIntros := len(allIntros.introPoints)
|
||||
nIntrosWanted := nIntros * p.NIntrosPerInstance()
|
||||
|
||||
//Make sure not to pass the Tor process max of 20 introduction points
|
||||
if nIntrosWanted > 20 {
|
||||
nIntrosWanted = 20
|
||||
}
|
||||
|
||||
//Make sure to require at least 3 introduction points to prevent gobalance from being obvious in low instance counts
|
||||
if nIntrosWanted < 3 {
|
||||
nIntrosWanted = 3
|
||||
}
|
||||
|
||||
finalIntros := allIntros.choose(nIntrosWanted)
|
||||
if len(finalIntros) == 0 {
|
||||
logrus.Info("Got no usable intro points from our instances. Delaying descriptor push...")
|
||||
return nil, ErrNotEnoughIntros
|
||||
}
|
||||
|
||||
logrus.Infof("We got %d intros from %d instances. We want %d intros ourselves (got: %d)", len(allIntros.getIntroPointsFlat()), nIntros, nIntrosWanted, len(finalIntros))
|
||||
|
||||
return finalIntros, nil
|
||||
}
|
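A quick walk-through of the clamp above, as a hypothetical sketch assuming NIntrosPerInstance() returns 2 (the real value is a runtime parameter):

    // Sketch only: the [3, 20] clamp applied to nInstances * 2.
    func wantedIntros(nInstances int) int {
        n := nInstances * 2
        if n > 20 {
            n = 20 // Tor's per-descriptor maximum
        }
        if n < 3 {
            n = 3 // floor that keeps single-instance deployments less obvious
        }
        return n
    }
    // wantedIntros(1) == 3, wantedIntros(2) == 4, wantedIntros(15) == 20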
// getAllIntrosForPublish returns an IntroductionPointSetV3 with all the intros
// of all the instances of this service.
func (s *Service) getAllIntrosForPublish() *IntroductionPointSetV3 {
    allIntros := make([][]descriptor.IntroductionPointV3, 0)
    p := Params()

    // Sort instances so that the ones with newer descriptors come first.
    s.instancesMtx.Lock()
    sort.Slice(s.Instances, func(i, j int) bool {
        instIDescriptor := s.Instances[i].GetDescriptor()
        instJDescriptor := s.Instances[j].GetDescriptor()
        if instIDescriptor == nil || instIDescriptor.receivedTs == nil {
            return false
        }
        if instJDescriptor == nil || instJDescriptor.receivedTs == nil {
            return true
        }
        return instIDescriptor.receivedTs.After(*instJDescriptor.receivedTs)
    })
    s.instancesMtx.Unlock()

    // Reset the adaptive counters before they are recomputed below.
    p.SetAdaptUp(0)
    p.SetAdaptDown(0)
    p.SetAdaptDownNoDescriptor(0)
    p.SetAdaptDownInstanceOld(0)
    p.SetAdaptFetchFail(0)

    for _, inst := range s.GetInstances() {
        instanceIntros, err := inst.GetIntrosForPublish()
        if err != nil {
            if err == ErrInstanceHasNoDescriptor {
                logrus.Infof("Entirely missing a descriptor for instance %s. Continuing anyway if possible", inst.OnionAddress)
                continue
            } else if err == ErrInstanceIsOffline {
                logrus.Infof("Instance %s is offline. Ignoring its intro points...", inst.OnionAddress)
                continue
            }
        }
        allIntros = append(allIntros, instanceIntros)
    }
    adaptCount := p.AdaptUp() - p.AdaptDown()
    p.SetAdaptCount(adaptCount)
    logrus.Debugf("Current Adapt Count: %d", adaptCount)
    return NewIntroductionPointSetV3(allIntros)
}

type IntroductionPointSet struct {
}

type IntroductionPointSetV3 struct {
    IntroductionPointSet
    introPoints [][]descriptor.IntroductionPointV3
}

func NewIntroductionPointSetV3(introductionPoints [][]descriptor.IntroductionPointV3) *IntroductionPointSetV3 {
    for k, instanceIps := range introductionPoints {
        for i := len(instanceIps) - 1; i >= 0; i-- {
            if instanceIps[i].LegacyKeyRaw != nil {
                logrus.Info("Ignoring introduction point with legacy key.")
                instanceIps = append(instanceIps[:i], instanceIps[i+1:]...)
            }
        }
        // Write the filtered slice back; the range variable is only a copy,
        // so without this the legacy intro points would not actually be dropped.
        introductionPoints[k] = instanceIps
    }

    i := &IntroductionPointSetV3{}

    // Shuffle each instance's intro points, then shuffle the instance order itself.
    for idx, instanceIntroPoints := range introductionPoints {
        rand.Shuffle(len(instanceIntroPoints), func(i, j int) {
            introductionPoints[idx][i], introductionPoints[idx][j] = introductionPoints[idx][j], introductionPoints[idx][i]
        })
    }
    rand.Shuffle(len(introductionPoints), func(i, j int) {
        introductionPoints[i], introductionPoints[j] = introductionPoints[j], introductionPoints[i]
    })
    i.introPoints = introductionPoints
    // self._intro_point_generator = self._get_intro_point()
    return i
}

func (i IntroductionPointSetV3) Equals(other IntroductionPointSetV3) bool {
    aIntroPoints := i.getIntroPointsFlat()
    bIntroPoints := other.getIntroPointsFlat()
    sort.Slice(aIntroPoints, func(i, j int) bool { return aIntroPoints[i].OnionKey < aIntroPoints[j].OnionKey })
    sort.Slice(bIntroPoints, func(i, j int) bool { return bIntroPoints[i].OnionKey < bIntroPoints[j].OnionKey })
    if len(aIntroPoints) != len(bIntroPoints) {
        return false
    }
    for idx := 0; idx < len(aIntroPoints); idx++ {
        if !aIntroPoints[idx].Equals(bIntroPoints[idx]) {
            return false
        }
    }
    return true
}

func (i IntroductionPointSetV3) Len() (count int) {
    for _, ip := range i.introPoints {
        count += len(ip)
    }
    return
}

// getIntroPointsFlat flattens the introPoints list of lists into a single list and returns it.
func (i IntroductionPointSetV3) getIntroPointsFlat() []descriptor.IntroductionPointV3 {
    flatten := make([]descriptor.IntroductionPointV3, 0)
    for _, ip := range i.introPoints {
        flatten = append(flatten, ip...)
    }
    return flatten
}

// choose retrieves up to `count` introduction points from the set.
// Where more than `count` IPs are available, the shuffles performed in
// NewIntroductionPointSetV3 (plus the optional extra shuffle here) randomize
// which instances' intro points survive the truncation.
// Returns a list of IntroductionPoints.
func (i IntroductionPointSetV3) choose(count int) []descriptor.IntroductionPointV3 {
    p := Params()
    chosenIps := i.getIntroPointsFlat()
    if p.AdaptShuffle() == 1 {
        rand.Shuffle(len(chosenIps), func(i, j int) { chosenIps[i], chosenIps[j] = chosenIps[j], chosenIps[i] })
    }
    if len(chosenIps) > count {
        chosenIps = chosenIps[:count]
    }
    return chosenIps
}
// shouldPublishDescriptorNow returns true if we should publish a descriptor right now.
func (s *Service) shouldPublishDescriptorNow(isFirstDesc bool) bool {
    p := Params()
    // If the descriptor has not been uploaded yet, do it now!
    if isFirstDesc && s.firstDescriptor == nil {
        logrus.Debugf("Descriptor not uploaded!")
        return true
    }
    if !isFirstDesc && s.secondDescriptor == nil {
        logrus.Debugf("Second descriptor not uploaded!")
        return true
    }

    if p.AdaptForcePublish() == 1 {
        return true
    }

    // OK, this is not the first time we publish a descriptor. Check various
    // parameters to see if we should try to publish again; each predicate is
    // evaluated once, logged, and folded into the combined decision.
    introModified := s.introSetModified(isFirstDesc)
    if introModified {
        logrus.Debugf("Intro set was modified!")
    }
    expired := s.descriptorHasExpired(isFirstDesc)
    if expired {
        logrus.Debugf("Descriptor expired!")
    }
    hsdirChanged := s.HsdirSetChanged(isFirstDesc)
    if hsdirChanged {
        logrus.Debugf("HSDIR set was changed!")
    }

    return introModified || expired || hsdirChanged
}

// introSetModified checks if the introduction point set has changed since the last publish.
func (s *Service) introSetModified(isFirstDesc bool) bool {
    var lastUploadTs *time.Time
    if isFirstDesc {
        lastUploadTs = s.firstDescriptor.lastUploadTs
    } else {
        lastUploadTs = s.secondDescriptor.lastUploadTs
    }
    if lastUploadTs == nil {
        logrus.Info("\t Descriptor never published before. Do it now!")
        return true
    }
    for _, inst := range s.GetInstances() {
        if inst.IntroSetModifiedTimestamp == nil {
            logrus.Info("\t Still don't have a descriptor for this instance")
            continue
        }
        if (*inst.IntroSetModifiedTimestamp).After(*lastUploadTs) {
            logrus.Info("\t Intro set modified")
            return true
        }
    }
    logrus.Info("\t Intro set not modified")
    return false
}

// descriptorHasExpired checks if the descriptor has expired (hasn't been
// uploaded recently). If isFirstDesc is set, check the first descriptor of
// the service, otherwise the second.
func (s *Service) descriptorHasExpired(isFirstDesc bool) bool {
    var lastUploadTs *time.Time
    if isFirstDesc {
        lastUploadTs = s.firstDescriptor.lastUploadTs
    } else {
        lastUploadTs = s.secondDescriptor.lastUploadTs
    }
    descriptorAge := time.Since(*lastUploadTs).Seconds()
    if descriptorAge > s.getDescriptorLifetime().Seconds() {
        logrus.Infof("\t Our %t descriptor has expired (%g seconds old). Uploading new one.", isFirstDesc, descriptorAge)
        return true
    }
    logrus.Infof("\t Our %t descriptor is still fresh (%g seconds old).", isFirstDesc, descriptorAge)
    return false
}

// HsdirSetChanged returns true if the HSDir set has changed between the last
// upload of this descriptor and the current state of things.
func (s *Service) HsdirSetChanged(isFirstDesc bool) bool {
    // Derive the blinding parameter
    _, timePeriodNumber := GetSrvAndTimePeriod(isFirstDesc, *s.consensus.Consensus())
    blindedParam := s.consensus.Consensus().GetBlindingParam(s.getIdentityPubkeyBytes(), timePeriodNumber)

    // Get the blinded key
    blindedKey := util.BlindedPubkey(s.getIdentityPubkeyBytes(), blindedParam)

    responsibleHsdirs, err := GetResponsibleHsdirs(blindedKey, isFirstDesc, s.consensus)
    if err != nil {
        if err == ErrEmptyHashRing {
            return false
        }
        panic(err)
    }

    var previousResponsibleHsdirs []string
    if isFirstDesc {
        previousResponsibleHsdirs = s.firstDescriptor.responsibleHsdirs
    } else {
        previousResponsibleHsdirs = s.secondDescriptor.responsibleHsdirs
    }

    sort.Strings(responsibleHsdirs)
    sort.Strings(previousResponsibleHsdirs)
    if len(responsibleHsdirs) != len(previousResponsibleHsdirs) {
        logrus.Infof("\t HSDir set changed (%s vs %s)", responsibleHsdirs, previousResponsibleHsdirs)
        return true
    }
    for i, el := range responsibleHsdirs {
        if previousResponsibleHsdirs[i] != el {
            logrus.Infof("\t HSDir set changed (%s vs %s)", responsibleHsdirs, previousResponsibleHsdirs)
            return true
        }
    }

    logrus.Info("\t HSDir set remained the same")
    return false
}

func (s *Service) getIdentityPubkeyBytes() ed25519.PublicKey {
    return s.identityPrivKey.Public()
}

func (s *Service) getDescriptorLifetime() time.Duration {
    //if onionbalance.OnionBalance().IsTestnet {
    //	return param.FrontendDescriptorLifetimeTestnet
    //}
    p := Params()
    return time.Duration(p.FrontendDescriptorLifetime())
}
@ -0,0 +1,15 @@
package onionbalance

import (
    "github.com/stretchr/testify/assert"
    "testing"
)

func TestGetRollingSubArr(t *testing.T) {
    arr := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
    assert.Equal(t, []int{1, 2, 3}, getRollingSubArr(arr, 0, 3))
    assert.Equal(t, []int{4, 5, 6}, getRollingSubArr(arr, 1, 3))
    assert.Equal(t, []int{7, 8, 9}, getRollingSubArr(arr, 2, 3))
    assert.Equal(t, []int{10, 1, 2}, getRollingSubArr(arr, 3, 3))
    assert.Equal(t, []int{3, 4, 5}, getRollingSubArr(arr, 4, 3))
}
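getRollingSubArr itself is not part of this hunk; from the assertions, it takes rolling, wrap-around windows over the slice. A minimal sketch consistent with the test (the real implementation may differ in signature or generality):

    // Sketch only: a window of `size` elements starting at idx*size,
    // wrapping around the end of the slice, as TestGetRollingSubArr expects.
    func getRollingSubArr(arr []int, idx, size int) []int {
        out := make([]int, 0, size)
        start := (idx * size) % len(arr)
        for j := 0; j < size; j++ {
            out = append(out, arr[(start+j)%len(arr)])
        }
        return out
    }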
113
endgamefiles/sourcecode/gobalance/pkg/onionbalance/torNode.go
Normal file
@ -0,0 +1,113 @@
package onionbalance

import (
    "encoding/base64"
    "encoding/binary"
    "errors"
    "github.com/sirupsen/logrus"
    "golang.org/x/crypto/sha3"
    "strings"
    "sync"
)

type TorNode struct {
    microdescriptor    MicroDescriptor
    microdescriptorMtx sync.RWMutex
    routerstatus       *RouterStatus
    routerstatusMtx    sync.RWMutex
}

func NewNode(microdescriptor MicroDescriptor, routerstatus *RouterStatus) *TorNode {
    logrus.Debugf("Initializing node with fpr %s", routerstatus.Fingerprint)

    n := &TorNode{}
    n.setMicrodescriptor(microdescriptor)
    n.setRouterstatus(routerstatus)
    return n
}

func (n *TorNode) getRouterstatus() *RouterStatus {
    n.routerstatusMtx.RLock()
    defer n.routerstatusMtx.RUnlock()
    return n.routerstatus
}

func (n *TorNode) setRouterstatus(newVal *RouterStatus) {
    n.routerstatusMtx.Lock()
    defer n.routerstatusMtx.Unlock()
    n.routerstatus = newVal
}

func (n *TorNode) getMicrodescriptor() MicroDescriptor {
    n.microdescriptorMtx.RLock()
    defer n.microdescriptorMtx.RUnlock()
    return n.microdescriptor
}

func (n *TorNode) setMicrodescriptor(newVal MicroDescriptor) {
    n.microdescriptorMtx.Lock()
    defer n.microdescriptorMtx.Unlock()
    n.microdescriptor = newVal
}

func (n *TorNode) GetHexFingerprint() Fingerprint {
    return n.getRouterstatus().Fingerprint
}

var ErrNoHSDir = errors.New("NoHSDir")
var ErrNoEd25519Identity = errors.New("NoEd25519Identity")

// GetHsdirIndex gets the HSDir index for this node:
//
//	hsdir_index(node) = H("node-idx" | node_identity |
//	                      shared_random_value |
//	                      INT_8(period_num) |
//	                      INT_8(period_length) )
//
// Returns ErrNoHSDir or ErrNoEd25519Identity in case of errors.
func (n *TorNode) GetHsdirIndex(srv []byte, periodNum int64, consensus *Consensus) ([]byte, error) {
    // See if this node can be an HSDir (it needs the HSDir flag).
    if !n.getRouterstatus().Flags.HSDir {
        return nil, ErrNoHSDir
    }

    // See if an ed25519 identity is available for this node.
    if _, found := n.getMicrodescriptor().Identifiers["ed25519"]; !found {
        return nil, ErrNoEd25519Identity
    }

    // The ed25519 identity comes as an unpadded base64 string; add the
    // missing padding so that the strict Go base64 decoder accepts it.
    // TODO: Abstract this into its own function...
    ed25519NodeIdentityB64 := n.getMicrodescriptor().Identifiers["ed25519"]
    missingPadding := (4 - len(ed25519NodeIdentityB64)%4) % 4
    ed25519NodeIdentityB64 += strings.Repeat("=", missingPadding)
    ed25519NodeIdentity, _ := base64.StdEncoding.DecodeString(ed25519NodeIdentityB64)

    periodNumInt8 := make([]byte, 8)
    binary.BigEndian.PutUint64(periodNumInt8, uint64(periodNum))
    periodLength := consensus.Consensus().GetTimePeriodLength()
    periodLengthInt8 := make([]byte, 8)
    binary.BigEndian.PutUint64(periodLengthInt8, uint64(periodLength))

    hashBody := "node-idx" + string(ed25519NodeIdentity) + string(srv) + string(periodNumInt8) + string(periodLengthInt8)
    hsdirIndex := sha3.Sum256([]byte(hashBody))

    return hsdirIndex[:], nil
}
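The HSDir hash ring is then typically built by ordering nodes on this index. A hedged sketch of that step (this helper is not in this file; it assumes the GetHsdirIndex above plus the bytes and sort packages):

    // Illustrative only: order candidate HSDirs around the hash ring.
    func sortByHsdirIndex(nodes []*TorNode, srv []byte, periodNum int64, consensus *Consensus) []*TorNode {
        type ranked struct {
            node *TorNode
            idx  []byte
        }
        ring := make([]ranked, 0, len(nodes))
        for _, n := range nodes {
            idx, err := n.GetHsdirIndex(srv, periodNum, consensus)
            if err != nil {
                continue // skip non-HSDirs and nodes without an ed25519 identity
            }
            ring = append(ring, ranked{n, idx})
        }
        sort.Slice(ring, func(a, b int) bool { return bytes.Compare(ring[a].idx, ring[b].idx) < 0 })
        out := make([]*TorNode, len(ring))
        for i, r := range ring {
            out[i] = r.node
        }
        return out
    }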
@ -0,0 +1,22 @@
package onionbalance

import (
    "bytes"
    "crypto/ed25519"
)

// LoadTorKeyFromDisk loads a private identity key as stored by little-t-tor.
func LoadTorKeyFromDisk(keyBytes []byte) ed25519.PrivateKey {
    if !bytes.Equal(keyBytes[:29], []byte("== ed25519v1-secret: type0 ==")) {
        panic("Tor key does not start with Tor header")
    }
    expandedSk := keyBytes[32:]

    // The rest should be 64 bytes (a,h):
    // 32 bytes for the secret scalar 'a'
    // 32 bytes for the PRF key 'h'
    if len(expandedSk) != 64 {
        panic("Tor private key has the wrong length")
    }
    return expandedSk
}
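In practice the bytes come straight from a service's key file on disk. A minimal usage sketch (the path is illustrative; tor names the file hs_ed25519_secret_key inside the service directory, and the os/log imports are assumed):

    // Sketch only: loading a service identity key the way tor stores it.
    raw, err := os.ReadFile("/var/lib/tor/hidden_service/hs_ed25519_secret_key")
    if err != nil {
        log.Fatal(err)
    }
    sk := LoadTorKeyFromDisk(raw) // 64-byte expanded secret key (a, h)
    _ = sk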
@ -0,0 +1,262 @@
package descriptor

import (
    "crypto/ed25519"
    "encoding/base64"
    "encoding/binary"
    "fmt"
    "github.com/sirupsen/logrus"
    "gobalance/pkg/btime"
    "strings"
    "time"
)

type Ed25519Certificate struct {
    Version uint8
}

// Ed25519CertificateV1 is a version 1 Ed25519 certificate, used to sign tor
// server and hidden service v3 descriptors.
type Ed25519CertificateV1 struct {
    Ed25519Certificate
    Typ        uint8
    typInt     int64
    Expiration time.Time
    KeyType    uint8
    Key        ed25519.PublicKey
    Extensions []Ed25519Extension
    Signature  []byte
}

func (c Ed25519CertificateV1) pack() (out []byte) {
    out = append(out, c.Version)
    out = append(out, c.Typ)
    expiration := c.Expiration.Unix() / 3600 // expiration is stored in hours since the epoch
    expirationBytes := make([]byte, 4)
    binary.BigEndian.PutUint32(expirationBytes, uint32(expiration))
    out = append(out, expirationBytes...)
    out = append(out, c.KeyType)
    out = append(out, c.Key...)
    out = append(out, uint8(len(c.Extensions)))
    for _, ext := range c.Extensions {
        out = append(out, ext.Pack()...)
    }
    if c.Signature != nil {
        out = append(out, c.Signature...)
    }
    return
}

// ToBase64 returns the base64 encoded certificate data.
func (c Ed25519CertificateV1) ToBase64() (out string) {
    b64 := strings.Join(splitByLength(base64.StdEncoding.EncodeToString(c.pack()), 64), "\n")
    out = fmt.Sprintf("-----BEGIN ED25519 CERT-----\n%s\n-----END ED25519 CERT-----", b64)
    return out
}

func splitByLength(msg string, size int) (out []string) {
    for i := 0; i < len(msg); i += size {
        upper := i + size
        if i+size > len(msg) {
            upper = len(msg)
        }
        out = append(out, msg[i:upper])
    }
    return
}

const DefaultExpirationHours = 54 // tor's HSv3 certificate expiration

func NewEd25519CertificateV1(certType uint8, expiration *time.Time, keyType uint8, key ed25519.PublicKey,
    extensions []Ed25519Extension, signingKey ed25519.PrivateKey, signature []byte) Ed25519CertificateV1 {
    c := Ed25519CertificateV1{}
    c.Version = 1
    if certType == 0 {
        panic("Certificate type is required")
    } else if key == nil {
        panic("Certificate key is required")
    }
    if certType == 8 {
        c.Typ, c.typInt = HsV3DescSigning, 8
    } else if certType == 9 {
        c.Typ, c.typInt = HsV3IntroAuth, 9
    } else if certType == 11 {
        c.Typ, c.typInt = HsV3NtorEnc, 11
    } else {
        panic("unknown cert type")
    }
    if expiration == nil {
        c.Expiration = btime.Clock.Now().UTC().Add(DefaultExpirationHours * time.Hour)
    } else {
        c.Expiration = expiration.UTC()
    }
    c.KeyType = keyType
    c.Key = key
    c.Extensions = extensions
    c.Signature = signature
    if signingKey != nil {
        calculatedSig := ed25519.Sign(signingKey, c.pack())
        // TODO: if the caller provides both a signing key and a signature,
        // ensure they match (upstream stem raises ValueError here).
        c.Signature = calculatedSig
    }
    if c.Typ == LINK || c.Typ == IDENTITY || c.Typ == AUTHENTICATE {
        logrus.Panicf("Ed25519 certificate cannot have a type of %d. This is reserved for CERTS cells.", c.typInt)
    } else if c.Typ == ED25519_IDENTITY {
        panic("Ed25519 certificate cannot have a type of 7. This is reserved for RSA identity cross-certification.")
    } else if c.Typ == 0 {
        logrus.Panicf("Ed25519 certificate type %d is unrecognized", c.typInt)
    }
    return c
}

func (c *Ed25519CertificateV1) SigningKey() ed25519.PublicKey {
    for _, ext := range c.Extensions {
        if ext.Typ == HasSigningKey {
            return ext.Data
        }
    }
    return nil
}

// Certificate types from tor's cert-spec; HsV3DescSigning (8), HsV3IntroAuth (9)
// and HsV3NtorEnc (11) are the ones used in v3 onion service descriptors.
const (
    LINK = iota + 1
    IDENTITY
    AUTHENTICATE
    ED25519_SIGNING
    LINK_CERT
    ED25519_AUTHENTICATE
    ED25519_IDENTITY
    HsV3DescSigning
    HsV3IntroAuth
    NTOR_ONION_KEY
    HsV3NtorEnc
)

// Ed25519CertificateV1Unpack parses a byte encoded ED25519 certificate.
func Ed25519CertificateV1Unpack(content []byte) Ed25519CertificateV1 {
    if len(content) == 0 {
        logrus.Panicf("Failed to unpack ed25519 certificate")
    }
    version := content[0]
    if version != 1 {
        logrus.Panicf("Ed25519 certificate is version %d. Parser presently only supports version 1.", version)
    }
    return ed25519CertificateV1Unpack(content)
}

// Ed25519CertificateFromBase64 parses a base64 encoded ED25519 certificate.
func Ed25519CertificateFromBase64(content string) Ed25519CertificateV1 {
    if strings.HasPrefix(content, "-----BEGIN ED25519 CERT-----\n") &&
        strings.HasSuffix(content, "\n-----END ED25519 CERT-----") {
        content = strings.TrimPrefix(content, "-----BEGIN ED25519 CERT-----\n")
        content = strings.TrimSuffix(content, "\n-----END ED25519 CERT-----")
    }
    by, err := base64.StdEncoding.DecodeString(content)
    if err != nil {
        panic(err)
    } else if len(by) == 0 {
        return Ed25519CertificateV1{}
    }
    return Ed25519CertificateV1Unpack(by)
}

const (
    Ed25519KeyLength       = 32
    Ed25519HeaderLength    = 40
    Ed25519SignatureLength = 64
)

func ed25519CertificateV1Unpack(content []byte) Ed25519CertificateV1 {
    if len(content) < Ed25519HeaderLength+Ed25519SignatureLength {
        logrus.Panicf("Ed25519 certificate was %d bytes, but should be at least %d", len(content), Ed25519HeaderLength+Ed25519SignatureLength)
    }
    header, signature := content[:len(content)-Ed25519SignatureLength], content[len(content)-Ed25519SignatureLength:]

    version, header := header[0], header[1:]
    certType, header := header[0], header[1:]
    expirationHoursRaw, header := header[:4], header[4:]
    var expirationHours int64
    expirationHours |= int64(expirationHoursRaw[0]) << 24
    expirationHours |= int64(expirationHoursRaw[1]) << 16
    expirationHours |= int64(expirationHoursRaw[2]) << 8
    expirationHours |= int64(expirationHoursRaw[3])
    keyType, header := header[0], header[1:]
    key, header := header[:Ed25519KeyLength], header[Ed25519KeyLength:]
    extensionCount, extensionData := header[0], header[1:]
    if version != 1 {
        logrus.Panicf("Ed25519 v1 parser cannot read version %d certificates", version)
    }
    extensions := make([]Ed25519Extension, 0)
    for i := 0; i < int(extensionCount); i++ {
        var extension Ed25519Extension
        extension, extensionData = Ed25519ExtensionPop(extensionData)
        extensions = append(extensions, extension)
    }
    if len(extensionData) > 0 {
        logrus.Panicf("Ed25519 certificate had %d bytes of unused extension data", len(extensionData))
    }
    expiration := time.Unix(expirationHours*3600, 0)
    return NewEd25519CertificateV1(certType,
        &expiration,
        keyType, key, extensions, nil, signature)
}

type Ed25519Extension struct {
    Typ     uint8
    Flags   []string
    FlagInt uint8
    Data    []byte
}

func NewEd25519Extension(extType, flagVal uint8, data []byte) Ed25519Extension {
    e := Ed25519Extension{}
    e.Typ = extType
    e.Flags = make([]string, 0)
    e.FlagInt = flagVal
    e.Data = data
    if flagVal > 0 && flagVal%2 == 1 {
        e.Flags = append(e.Flags, "AFFECTS_VALIDATION")
        flagVal -= 1
    }
    if flagVal > 0 {
        e.Flags = append(e.Flags, "UNKNOWN")
    }
    if extType == HasSigningKey && len(data) != 32 {
        logrus.Panicf("Ed25519 HAS_SIGNING_KEY extension must be 32 bytes, but was %d.", len(data))
    }
    return e
}

func Ed25519ExtensionPop(content []byte) (Ed25519Extension, []byte) {
    if len(content) < 4 {
        panic("Ed25519 extension is missing header fields")
    }

    dataSizeRaw, content := content[:2], content[2:]
    var dataSize int64
    dataSize |= int64(dataSizeRaw[0]) << 8
    dataSize |= int64(dataSizeRaw[1])
    extType, content := content[0], content[1:]
    flags, content := content[0], content[1:]
    // Check the length before slicing so a truncated extension produces the
    // intended panic message instead of an out-of-range slice panic.
    if int64(len(content)) < dataSize {
        logrus.Panicf("Ed25519 extension is truncated. It should have %d bytes of data but there's only %d.", dataSize, len(content))
    }
    data, content := content[:dataSize], content[dataSize:]

    return NewEd25519Extension(extType, flags, data), content
}

func (e Ed25519Extension) Pack() (out []byte) {
    dataSizeBytes := make([]byte, 2)
    binary.BigEndian.PutUint16(dataSizeBytes, uint16(len(e.Data)))
    out = append(out, dataSizeBytes...)
    out = append(out, e.Typ)
    out = append(out, e.FlagInt)
    out = append(out, e.Data...)
    return
}
@ -0,0 +1,34 @@
package descriptor

import (
    "encoding/base64"
    "github.com/stretchr/testify/assert"
    "testing"
)

func TestEd25519CertificateToBase64(t *testing.T) {
    certRaw := `-----BEGIN ED25519 CERT-----
AQkABvnvASpbRl8c5Iwx+KYXIGHMA+66ZN88TppVrRqrwyZkv45UAQAgBABcfN7F
QCPKVVMMIsn/OMg/XEQjOhfiqBB7DDU36l7dR+vl8qUr8ApIEPse2nAPmz8EscmY
25grvptE/1o0mS1ynpEPmeFrGbUCVyWsntwLyn77bscvNdG8Mozov3bGFQU=
-----END ED25519 CERT-----`
    cert := Ed25519CertificateFromBase64(certRaw)
    newCert := cert.ToBase64()
    assert.Equal(t, certRaw, newCert)
}

func TestEd25519CertificateV1Pack(t *testing.T) {
    raw := "AQgABvnxAVx83sVAI8pVUwwiyf84yD9cRCM6F+KoEHsMNTfqXt1HAQAgBAB0tYzO/dvRZRujduw/KKmyulEhsEvjhVbhZ4ALCYkMgBpLO+hsNQqVdbTWvm5FrMZcyuCP4451WdpYlgOlsG8Mu3goFEM8B2KWQdzVpI69oq61geN5yzwnhO7zH/o1qwo="
    by1, _ := base64.StdEncoding.DecodeString(raw)
    cert := ed25519CertificateV1Unpack(by1)
    by2 := cert.pack()
    assert.Equal(t, by1, by2)
}

func TestEd25519ExtensionPack(t *testing.T) {
    raw := "ACAEAHS1jM7929FlG6N27D8oqbK6USGwS+OFVuFngAsJiQyA"
    by1, _ := base64.StdEncoding.DecodeString(raw)
    ext, _ := Ed25519ExtensionPop(by1)
    by2 := ext.Pack()
    assert.Equal(t, by1, by2)
}
@ -0,0 +1,832 @@
package descriptor

import (
    "bytes"
    "crypto/aes"
    "crypto/cipher"
    "crypto/ed25519"
    "crypto/x509"
    "encoding/base32"
    "encoding/base64"
    "encoding/binary"
    "encoding/hex"
    "encoding/pem"
    "errors"
    "fmt"
    "github.com/sirupsen/logrus"
    "gobalance/pkg/brand"
    "gobalance/pkg/btime"
    "gobalance/pkg/gobpk"
    "gobalance/pkg/stem/util"
    "golang.org/x/crypto/sha3"
    "maze.io/x/crypto/x25519"
    "strconv"
    "strings"
)

// Descriptor is the common parent for all types of descriptors.
// https://github.com/torproject/torspec/blob/4da63977b86f4c17d0e8cf87ed492c72a4c9b2d9/rend-spec-v3.txt#L1057
type Descriptor struct {
    HsDescriptorVersion      int64
    descriptorLifetime       int64
    DescriptorSigningKeyCert string
    revisionCounter          int64
    superencrypted           string
    signature                string
}

func (d *Descriptor) FromStr(content string) {
    *d = *descFromStr(content)
}

func descFromStr(content string) *Descriptor {
    d := &Descriptor{}
    lines := strings.Split(content, "\n")
    startCert := false
    startSuperencrypted := false
    for idx, line := range lines {
        if idx == 0 {
            d.HsDescriptorVersion, _ = strconv.ParseInt(strings.TrimPrefix(line, "hs-descriptor "), 10, 64)
            continue
        } else if idx == 1 {
            d.descriptorLifetime, _ = strconv.ParseInt(strings.TrimPrefix(line, "descriptor-lifetime "), 10, 64)
            continue
        } else if line == "descriptor-signing-key-cert" {
            startCert = true
            continue
        } else if line == "superencrypted" {
            startSuperencrypted = true
            continue
        } else if strings.HasPrefix(line, "revision-counter ") {
            d.revisionCounter, _ = strconv.ParseInt(strings.TrimPrefix(line, "revision-counter "), 10, 64)
            continue
        } else if strings.HasPrefix(line, "signature ") {
            d.signature = strings.TrimPrefix(line, "signature ")
            continue
        }
        if startCert {
            d.DescriptorSigningKeyCert += line + "\n"
            if line == "-----END ED25519 CERT-----" {
                startCert = false
                d.DescriptorSigningKeyCert = strings.TrimSpace(d.DescriptorSigningKeyCert)
            }
        } else if startSuperencrypted {
            d.superencrypted += line + "\n"
            if line == "-----END MESSAGE-----" {
                startSuperencrypted = false
                d.superencrypted = strings.TrimSpace(d.superencrypted)
            }
        }
    }
    return d
}

// BaseHiddenServiceDescriptor is the hidden service descriptor.
type BaseHiddenServiceDescriptor struct {
    Descriptor
}

const (
    // HasSigningKey is the ExtensionType of a HAS_SIGNING_KEY extension.
    HasSigningKey = 4
)

// HiddenServiceDescriptorV3 is a version 3 hidden service descriptor.
type HiddenServiceDescriptorV3 struct {
    BaseHiddenServiceDescriptor
    SigningCert Ed25519CertificateV1
    InnerLayer  *InnerLayer
    rawContents string
}

func (d HiddenServiceDescriptorV3) String() string {
    var sb strings.Builder
    sb.WriteString("hs-descriptor 3\n")
    sb.WriteString("descriptor-lifetime ")
    sb.WriteString(strconv.FormatInt(d.descriptorLifetime, 10))
    sb.WriteByte('\n')
    sb.WriteString("descriptor-signing-key-cert\n")
    sb.WriteString(d.DescriptorSigningKeyCert)
    sb.WriteByte('\n')
    sb.WriteString("revision-counter ")
    sb.WriteString(strconv.FormatInt(d.revisionCounter, 10))
    sb.WriteByte('\n')
    sb.WriteString("superencrypted\n")
    sb.WriteString(d.superencrypted)
    sb.WriteByte('\n')
    sb.WriteString("signature ")
    sb.WriteString(d.signature)
    return sb.String()
}
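Taken together, FromStr and String are near inverses over the plaintext fields. A minimal round-trip sketch (the cert and superencrypted bodies are placeholders, not valid payloads; only the line structure matters to the parser):

    // Sketch only: parse-then-serialize round trip.
    raw := "hs-descriptor 3\n" +
        "descriptor-lifetime 180\n" +
        "descriptor-signing-key-cert\n-----BEGIN ED25519 CERT-----\n...\n-----END ED25519 CERT-----\n" +
        "revision-counter 42\n" +
        "superencrypted\n-----BEGIN MESSAGE-----\n...\n-----END MESSAGE-----\n" +
        "signature dGVzdA"
    var d HiddenServiceDescriptorV3
    d.FromStr(raw)
    fmt.Println(d.String() == raw) // expected: true for well-formed input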
func blindedPubkey(identityKey gobpk.PrivateKey, blindingNonce []byte) ed25519.PublicKey {
    return util.BlindedPubkey(identityKey.Public(), blindingNonce)
}

func blindedSign(msg []byte, identityKey gobpk.PrivateKey, blindedKey, blindingNonce []byte) []byte {
    if identityKey.IsPrivKeyInTorFormat() {
        return util.BlindedSignWithTorKey(msg, identityKey.Seed(), blindedKey, blindingNonce)
    }
    return util.BlindedSign(msg, identityKey.Seed(), blindedKey, blindingNonce)
}

func HiddenServiceDescriptorV3Content(blindingNonce []byte, identityKey gobpk.PrivateKey,
    descSigningKey ed25519.PrivateKey, innerLayer *InnerLayer, revCounter *int64) string {
    if innerLayer == nil {
        tmp := InnerLayerCreate(nil)
        innerLayer = &tmp
    }
    if descSigningKey == nil {
        _, descSigningKey, _ = ed25519.GenerateKey(brand.Reader())
    }
    if revCounter == nil {
        tmp := btime.Clock.Now().Unix()
        revCounter = &tmp
    }
    blindedKey := blindedPubkey(identityKey, blindingNonce)
    pub := identityKey.Public()
    subcredential := subcredential(pub, blindedKey)

    outerLayer := OuterLayerCreate(innerLayer, revCounter, subcredential, blindedKey)
    signingCert := getSigningCert(blindedKey, descSigningKey, identityKey, blindingNonce)

    descContent := "hs-descriptor 3\n"
    descContent += fmt.Sprintf("descriptor-lifetime %d\n", 180)
    descContent += "descriptor-signing-key-cert\n"
    descContent += signingCert.ToBase64() + "\n"
    descContent += fmt.Sprintf("revision-counter %d\n", *revCounter)
    descContent += "superencrypted\n"
    descContent += outerLayer.encrypt(*revCounter, subcredential, blindedKey) + "\n"

    sigContent := SigPrefixHsV3 + descContent
    sig := ed25519.Sign(descSigningKey, []byte(sigContent))
    descContent += fmt.Sprintf("signature %s", strings.TrimRight(base64.StdEncoding.EncodeToString(sig), "="))

    return descContent
}

func priv2Pem(pk ed25519.PrivateKey) string {
    var identityKeyPem bytes.Buffer
    identityKeyBytes, _ := x509.MarshalPKCS8PrivateKey(pk)
    block := &pem.Block{Type: "PRIVATE KEY", Bytes: identityKeyBytes}
    _ = pem.Encode(&identityKeyPem, block)
    return identityKeyPem.String()
}

// getSigningCert builds the descriptor signing certificate: it carries the
// signing key, embeds the blinded key in a HAS_SIGNING_KEY extension, and is
// itself signed with the blinded key.
func getSigningCert(blindedKey ed25519.PublicKey, descSigningKey ed25519.PrivateKey, identityKey gobpk.PrivateKey, blindingNonce []byte) Ed25519CertificateV1 {
    extensions := []Ed25519Extension{NewEd25519Extension(HasSigningKey, 0, blindedKey)}
    signingCert := NewEd25519CertificateV1(HsV3DescSigning, nil, 1, descSigningKey.Public().(ed25519.PublicKey), extensions, nil, nil)
    signingCert.Signature = blindedSign(signingCert.pack(), identityKey, blindedKey, blindingNonce)
    return signingCert
}

const SigPrefixHsV3 = "Tor onion service descriptor sig v3"

func HiddenServiceDescriptorV3Create(blindingNonce []byte, identityPrivKey gobpk.PrivateKey, descSigningKey ed25519.PrivateKey, v3DescInnerLayer InnerLayer, revCounter int64) *HiddenServiceDescriptorV3 {
    return NewHiddenServiceDescriptorV3(HiddenServiceDescriptorV3Content(blindingNonce, identityPrivKey, descSigningKey, &v3DescInnerLayer, &revCounter))
}

func NewHiddenServiceDescriptorV3(rawContents string) *HiddenServiceDescriptorV3 {
    d := &HiddenServiceDescriptorV3{}
    d.rawContents = rawContents
    d.Descriptor.FromStr(rawContents)
    d.SigningCert = Ed25519CertificateFromBase64(d.DescriptorSigningKeyCert)

    // TODO - n0tr1v
    return d
}

func (d *HiddenServiceDescriptorV3) Decrypt(onionAddress string) (i *InnerLayer, err error) {
    if d.InnerLayer == nil {
        descriptorSigningKeyCert := d.DescriptorSigningKeyCert
        cert := Ed25519CertificateFromBase64(descriptorSigningKeyCert)
        blindedKey := cert.SigningKey()
        if blindedKey == nil {
            return d.InnerLayer, errors.New("no signing key is present")
        }
        identityPublicKey := IdentityKeyFromAddress(onionAddress)
        subcredential := subcredential(identityPublicKey, blindedKey)
        outerLayer := outerLayerDecrypt(d.superencrypted, d.revisionCounter, subcredential, blindedKey)
        tmp := innerLayerDecrypt(outerLayer, d.revisionCounter, subcredential, blindedKey)
        d.InnerLayer = &tmp
    }
    return d.InnerLayer, nil
}

type InnerLayer struct {
    outer                      OuterLayer
    IntroductionPoints         []IntroductionPointV3
    unparsedIntroductionPoints string
    rawContents                string
}

func (l InnerLayer) encrypt(revisionCounter int64, subcredential, blindedKey []byte) string {
    // Encrypt back into an outer layer's 'encrypted' field.
    return encryptLayer(l.getBytes(), "hsdir-encrypted-data", revisionCounter, subcredential, blindedKey)
}

func (l InnerLayer) getBytes() []byte {
    return []byte(l.rawContents)
}

func InnerLayerContent(introductionPoints []IntroductionPointV3) string {
    var sb strings.Builder
    sb.WriteString("create2-formats 2")
    if introductionPoints != nil {
        for _, ip := range introductionPoints {
            sb.WriteByte('\n')
            sb.WriteString(ip.encode())
        }
    }
    return sb.String()
}

func InnerLayerCreate(introductionPoints []IntroductionPointV3) InnerLayer {
    return NewInnerLayer(InnerLayerContent(introductionPoints), OuterLayer{})
}

func NewInnerLayer(content string, outerLayer OuterLayer) InnerLayer {
    l := InnerLayer{}
    l.rawContents = content
    l.outer = outerLayer
    div := strings.Index(content, "\nintroduction-point ")
    if div != -1 {
        l.unparsedIntroductionPoints = content[div+1:]
        content = content[:div]
    } else {
        l.unparsedIntroductionPoints = ""
    }
    l.parseV3IntroductionPoints()
    return l
}

type IntroductionPointV3 struct {
    LinkSpecifiers []LinkSpecifier
    OnionKey       string
    EncKey         string
    AuthKeyCertRaw string
    EncKeyCertRaw  string
    AuthKeyCert    Ed25519CertificateV1
    EncKeyCert     Ed25519CertificateV1
    LegacyKeyRaw   any
}

func (i IntroductionPointV3) Equals(other IntroductionPointV3) bool {
    return i.encode() == other.encode()
}

// encode returns the descriptor representation of this introduction point.
func (i IntroductionPointV3) encode() string {
    var sb strings.Builder
    linkCount := uint8(len(i.LinkSpecifiers))
    linkSpecifiers := []byte{linkCount}
    for _, ls := range i.LinkSpecifiers {
        linkSpecifiers = append(linkSpecifiers, ls.pack()...)
    }
    sb.WriteString("introduction-point ")
    sb.WriteString(base64.StdEncoding.EncodeToString(linkSpecifiers))
    sb.WriteString("\n")

    sb.WriteString("onion-key ntor ")
    sb.WriteString(i.OnionKey)
    sb.WriteString("\n")

    sb.WriteString("auth-key\n")
    sb.WriteString(i.AuthKeyCertRaw)
    sb.WriteString("\n")

    if i.EncKey != "" {
        sb.WriteString("enc-key ntor ")
        sb.WriteString(i.EncKey)
        sb.WriteString("\n")
    }
    sb.WriteString("enc-key-cert\n")
    sb.WriteString(i.EncKeyCertRaw)
    return sb.String()
}
func parseLinkSpecifier(content string) []LinkSpecifier {
    decoded, err := base64.StdEncoding.DecodeString(content)
    if err != nil {
        logrus.Panicf("Unable to base64 decode introduction point (%v): %s", err, content)
    }
    content = string(decoded)
    linkSpecifiers := make([]LinkSpecifier, 0)
    count, content := content[0], content[1:]
    for i := 0; i < int(count); i++ {
        var linkSpecifier LinkSpecifier
        linkSpecifier, content = linkSpecifierPop(content)
        linkSpecifiers = append(linkSpecifiers, linkSpecifier)
    }
    if len(content) > 0 {
        logrus.Panicf("Introduction point had excessive data (%s)", content)
    }
    return linkSpecifiers
}

type LinkSpecifier struct {
    Typ   uint8
    Value []byte
}

func (l LinkSpecifier) String() string {
    return fmt.Sprintf("T:%d,V:%x", l.Typ, l.Value)
}

func (l LinkSpecifier) pack() (out []byte) {
    out = append(out, l.Typ)
    out = append(out, uint8(len(l.Value)))
    out = append(out, l.Value...)
    return
}

func linkSpecifierPop(packed string) (LinkSpecifier, string) {
    linkType, packed := packed[0], packed[1:]
    valueSize, packed := packed[0], packed[1:]
    if int(valueSize) > len(packed) {
        logrus.Panicf("Link specifier should have %d bytes, but only had %d remaining", valueSize, len(packed))
    }
    value, packed := packed[:valueSize], packed[valueSize:]
    switch linkType {
    case 0:
        return LinkByIPv4Unpack(value).LinkSpecifier, packed
    case 1:
        return LinkByIPv6Unpack(value).LinkSpecifier, packed
    case 2:
        return NewLinkByFingerprint([]byte(value)).LinkSpecifier, packed
    case 3:
        return NewLinkByEd25519([]byte(value)).LinkSpecifier, packed
    }
    return LinkSpecifier{Typ: linkType, Value: []byte(value)}, packed // unrecognized type
}
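The link-specifier TLV above can be sanity-checked with a quick pack/pop round trip (the address is a documentation placeholder, not a real relay):

    // Sketch only: a type-0 (IPv4) link specifier survives a round trip.
    ls := NewLinkByIPv4("203.0.113.5", 9001)
    popped, rest := linkSpecifierPop(string(ls.pack()))
    fmt.Println(popped.String(), len(rest)) // prints "T:0,V:cb0071052329 0"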
type LinkByIPv4 struct {
    LinkSpecifier
    Address string
    Port    uint16
}

func NewLinkByIPv4(address string, port uint16) LinkByIPv4 {
    portBytes := make([]byte, 2)
    binary.BigEndian.PutUint16(portBytes, port)
    l := LinkByIPv4{}
    l.Typ = 0
    l.Value = append(packIPV4Address(address), portBytes...)
    l.Address = address
    l.Port = port
    return l
}

func LinkByIPv4Unpack(value string) LinkByIPv4 {
    if len(value) != 6 {
        logrus.Panicf("IPv4 link specifiers should be six bytes, but was %d instead: %x", len(value), value)
    }
    addr, portRaw := value[:4], value[4:]
    port := binary.BigEndian.Uint16([]byte(portRaw))
    return NewLinkByIPv4(unpackIPV4Address([]byte(addr)), port)
}

func NewLinkByIPv6(address string, port uint16) LinkByIPv6 {
    portBytes := make([]byte, 2)
    binary.BigEndian.PutUint16(portBytes, port)
    l := LinkByIPv6{}
    l.Typ = 1
    l.Value = append(packIPV6Address(address), portBytes...)
    l.Address = address
    l.Port = port
    return l
}

func LinkByIPv6Unpack(value string) LinkByIPv6 {
    if len(value) != 18 {
        logrus.Panicf("IPv6 link specifiers should be eighteen bytes, but was %d instead: %x", len(value), value)
    }
    addr, portRaw := value[:16], value[16:]
    port := binary.BigEndian.Uint16([]byte(portRaw))
    return NewLinkByIPv6(unpackIPV6Address([]byte(addr)), port)
}

func packIPV4Address(address string) (out []byte) {
    parts := strings.Split(address, ".")
    for _, part := range parts {
        tmp, _ := strconv.ParseUint(part, 10, 8)
        out = append(out, uint8(tmp))
    }
    return
}

func unpackIPV4Address(value []byte) string {
    strs := make([]string, 0)
    for i := 0; i < 4; i++ {
        strs = append(strs, fmt.Sprintf("%d", value[i]))
    }
    return strings.Join(strs, ".")
}

func packIPV6Address(address string) (out []byte) {
    parts := strings.Split(address, ":")
    for _, part := range parts {
        tmp, _ := hex.DecodeString(part)
        out = append(out, tmp...)
    }
    return
}

func unpackIPV6Address(value []byte) string {
    strs := make([]string, 0)
    for i := 0; i < 8; i++ {
        strs = append(strs, fmt.Sprintf("%04x", value[i*2:(i+1)*2]))
    }
    return strings.Join(strs, ":")
}

type LinkByIPv6 struct {
    LinkSpecifier
    Address string
    Port    uint16
}

type LinkByFingerprint struct {
    LinkSpecifier
    Fingerprint []byte
}

type LinkByEd25519 struct {
    LinkSpecifier
    Fingerprint []byte
}

func NewLinkByFingerprint(value []byte) LinkByFingerprint {
    if len(value) != 20 {
        logrus.Panicf("Fingerprint link specifiers should be twenty bytes, but was %d instead: %x", len(value), value)
    }
    l := LinkByFingerprint{}
    l.Typ = 2
    l.Value = value
    l.Fingerprint = value
    return l
}

func NewLinkByEd25519(value []byte) LinkByEd25519 {
    if len(value) != 32 {
        logrus.Panicf("Ed25519 link specifiers should be thirty-two bytes, but was %d instead: %x", len(value), value)
    }
    l := LinkByEd25519{}
    l.Typ = 3
    l.Value = value
    l.Fingerprint = value
    return l
}

func introductionPointV3Parse(content string) IntroductionPointV3 {
    ip := IntroductionPointV3{}
    authKeyCertContent := ""
    encKeyCertContent := ""
    lines := strings.Split(content, "\n")
    startAuthKey := false
    startEncKeyCert := false
    for _, line := range lines {
        if line == "auth-key" {
            startAuthKey = true
            continue
        } else if strings.HasPrefix(line, "introduction-point ") {
            ip.LinkSpecifiers = parseLinkSpecifier(strings.TrimPrefix(line, "introduction-point "))
            continue
        } else if strings.HasPrefix(line, "onion-key ntor ") {
            ip.OnionKey = strings.TrimPrefix(line, "onion-key ntor ")
            continue
        } else if strings.HasPrefix(line, "enc-key ntor ") {
            ip.EncKey = strings.TrimPrefix(line, "enc-key ntor ")
            continue
        } else if line == "enc-key-cert" {
            startEncKeyCert = true
            continue
        }
        if startAuthKey {
            authKeyCertContent += line + "\n"
            if line == "-----END ED25519 CERT-----" {
                startAuthKey = false
                authKeyCertContent = strings.TrimSpace(authKeyCertContent)
            }
        }
        if startEncKeyCert {
            encKeyCertContent += line + "\n"
            if line == "-----END ED25519 CERT-----" {
                startEncKeyCert = false
                encKeyCertContent = strings.TrimSpace(encKeyCertContent)
            }
        }
    }
    ip.AuthKeyCertRaw = authKeyCertContent
    ip.EncKeyCertRaw = encKeyCertContent
    ip.AuthKeyCert = Ed25519CertificateFromBase64(authKeyCertContent)
    ip.EncKeyCert = Ed25519CertificateFromBase64(encKeyCertContent)
    return ip
}

func (l *InnerLayer) parseV3IntroductionPoints() {
    introductionPoints := make([]IntroductionPointV3, 0)
    remaining := l.unparsedIntroductionPoints
    for remaining != "" {
        div := strings.Index(remaining, "\nintroduction-point ")
        var content string
        if div != -1 {
            content = remaining[:div]
            remaining = remaining[div+1:]
        } else {
            content = remaining
            remaining = ""
        }
        introductionPoints = append(introductionPoints, introductionPointV3Parse(content))
    }
    l.IntroductionPoints = introductionPoints
}

func innerLayerDecrypt(outerLayer OuterLayer, revisionCounter int64, subcredential, blindedKey ed25519.PublicKey) InnerLayer {
    plaintext := decryptLayer(outerLayer.encrypted, "hsdir-encrypted-data", revisionCounter, subcredential, blindedKey)
    return NewInnerLayer(plaintext, outerLayer)
}

type OuterLayer struct {
    encrypted  string
    rawContent string
}

func (l OuterLayer) encrypt(revisionCounter int64, subcredential, blindedKey []byte) string {
    // Spec mandated padding: "Before encryption the plaintext is padded with
    // NUL bytes to the nearest multiple of 10k bytes."
    padLen := (10000 - len(l.getBytes())%10000) % 10000
    content := append(l.getBytes(), bytes.Repeat([]byte{0x00}, padLen)...)
    // Encrypt back into a hidden service descriptor's 'superencrypted' field.
    return encryptLayer(content, "hsdir-superencrypted-data", revisionCounter, subcredential, blindedKey)
}

func encryptLayer(plaintext []byte, constant string, revisionCounter int64, subcredential, blindedKey []byte) string {
    salt := make([]byte, 16)
    _, _ = brand.Read(salt)
    return encryptLayerDet(plaintext, constant, revisionCounter, subcredential, blindedKey, salt)
}

// encryptLayerDet is the deterministic (fixed-salt) variant, used by tests.
func encryptLayerDet(plaintext []byte, constant string, revisionCounter int64, subcredential, blindedKey, salt []byte) string {
    ciphr, macFor := layerCipher(constant, revisionCounter, subcredential, blindedKey, salt)
    ciphertext := make([]byte, len(plaintext))
    ciphr.XORKeyStream(ciphertext, plaintext)
    encoded := base64.StdEncoding.EncodeToString([]byte(string(salt) + string(ciphertext) + string(macFor(ciphertext))))
    splits := splitByLength(encoded, 64)
    joined := strings.Join(splits, "\n")
    return fmt.Sprintf("-----BEGIN MESSAGE-----\n%s\n-----END MESSAGE-----", joined)
}
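A property worth keeping in mind (and the reason the deterministic variant exists): decryptLayer, defined below, inverts encryptLayerDet given the same constant, revision counter, and key material. A minimal sketch with placeholder keys and an all-zero salt:

    // Sketch only: round trip through the layer cipher with a fixed salt.
    subcred := make([]byte, 32)    // placeholder subcredential
    blindedKey := make([]byte, 32) // placeholder blinded public key
    salt := make([]byte, 16)       // all-zero salt for determinism
    enc := encryptLayerDet([]byte("create2-formats 2"), "hsdir-encrypted-data", 1, subcred, blindedKey, salt)
    dec := decryptLayer(enc, "hsdir-encrypted-data", 1, subcred, blindedKey)
    fmt.Println(dec == "create2-formats 2") // expected: true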
func (l OuterLayer) getBytes() []byte {
    return []byte(l.rawContent)
}

func OuterLayerCreate(innerLayer *InnerLayer, revisionCounter *int64, subcredential, blindedKey []byte) OuterLayer {
    return NewOuterLayer(OuterLayerContent(innerLayer, revisionCounter, subcredential, blindedKey))
}

// AuthorizedClient is a client authorized to use a v3 hidden service.
// id: base64 encoded client id
// iv: base64 encoded randomized initialization vector
// cookie: base64 encoded authentication cookie
type AuthorizedClient struct {
    id     string
    iv     string
    cookie string
}

func NewAuthorizedClient() AuthorizedClient {
    a := AuthorizedClient{}
    idBytes := make([]byte, 8)
    _, _ = brand.Read(idBytes)
    a.id = strings.TrimRight(base64.StdEncoding.EncodeToString(idBytes), "=")
    ivBytes := make([]byte, 16)
    _, _ = brand.Read(ivBytes)
    a.iv = strings.TrimRight(base64.StdEncoding.EncodeToString(ivBytes), "=")
    cookieBytes := make([]byte, 16)
    _, _ = brand.Read(cookieBytes)
    a.cookie = strings.TrimRight(base64.StdEncoding.EncodeToString(cookieBytes), "=")
    return a
}

func OuterLayerContent(innerLayer *InnerLayer, revisionCounter *int64, subcredential, blindedKey []byte) string {
    if innerLayer == nil {
        tmp := InnerLayerCreate(nil)
        innerLayer = &tmp
    }

    // Generate 16 random auth-client entries so the descriptor looks the same
    // whether or not client authorization is actually enabled.
    authorizedClients := make([]AuthorizedClient, 0)
    for i := 0; i < 16; i++ {
        authorizedClients = append(authorizedClients, NewAuthorizedClient())
    }

    pk, _ := x25519.GenerateKey(brand.Reader())

    out := "desc-auth-type x25519\n"
    out += "desc-auth-ephemeral-key " + base64.StdEncoding.EncodeToString(pk.PublicKey.Bytes()) + "\n"
    for _, c := range authorizedClients {
        out += fmt.Sprintf("auth-client %s %s %s\n", c.id, c.iv, c.cookie)
    }
    out += "encrypted\n"
    out += innerLayer.encrypt(*revisionCounter, subcredential, blindedKey)
    return out
}

func NewOuterLayer(content string) OuterLayer {
    l := OuterLayer{}
    l.rawContent = content
    l.encrypted = parseOuterLayer(content)
    return l
}

func parseOuterLayer(content string) string {
    out := ""
    lines := strings.Split(content, "\n")
    startEncrypted := false
    for _, line := range lines {
        if line == "encrypted" {
            startEncrypted = true
            continue
        }
        if startEncrypted {
            out += line + "\n"
            if line == "-----END MESSAGE-----" {
                startEncrypted = false
                out = strings.TrimSpace(out)
            }
        }
    }
    out = strings.ReplaceAll(out, "\r", "")
    out = strings.ReplaceAll(out, "\x00", "")
    return strings.TrimSpace(out)
}

func outerLayerDecrypt(encrypted string, revisionCounter int64, subcredential, blindedKey ed25519.PublicKey) OuterLayer {
    plaintext := decryptLayer(encrypted, "hsdir-superencrypted-data", revisionCounter, subcredential, blindedKey)
    return NewOuterLayer(plaintext)
}

func decryptLayer(encryptedBlock, constant string, revisionCounter int64, subcredential, blindedKey ed25519.PublicKey) string {
    if strings.HasPrefix(encryptedBlock, "-----BEGIN MESSAGE-----\n") &&
        strings.HasSuffix(encryptedBlock, "\n-----END MESSAGE-----") {
        encryptedBlock = strings.TrimPrefix(encryptedBlock, "-----BEGIN MESSAGE-----\n")
        encryptedBlock = strings.TrimSuffix(encryptedBlock, "\n-----END MESSAGE-----")
    }
    encrypted, err := base64.StdEncoding.DecodeString(encryptedBlock)
    if err != nil {
        panic("Unable to decode encrypted block as base64")
    }
    if len(encrypted) < SALT_LEN+MAC_LEN {
        logrus.Panicf("Encrypted block malformed (only %d bytes)", len(encrypted))
    }
    salt := encrypted[:SALT_LEN]
    ciphertext := encrypted[SALT_LEN : len(encrypted)-MAC_LEN]
    expectedMac := encrypted[len(encrypted)-MAC_LEN:]
    ciphr, macFor := layerCipher(constant, revisionCounter, subcredential, blindedKey, salt)

    if !bytes.Equal(expectedMac, macFor(ciphertext)) {
        logrus.Panicf("Malformed mac (expected %x, but was %x)", expectedMac, macFor(ciphertext))
    }

    plaintext := make([]byte, len(ciphertext))
    ciphr.XORKeyStream(plaintext, ciphertext)
    return string(plaintext)
}

func layerCipher(constant string, revisionCounter int64, subcredential []byte, blindedKey ed25519.PublicKey, salt []byte) (cipher.Stream, func([]byte) []byte) {
    // keys = SHAKE-256(blinded_key | subcredential | INT_8(revision) | salt | constant)
    keys := make([]byte, S_KEY_LEN+S_IV_LEN+MAC_LEN)
    data1 := make([]byte, 8)
    binary.BigEndian.PutUint64(data1, uint64(revisionCounter))
    data := []byte(string(blindedKey) + string(subcredential) + string(data1) + string(salt) + constant)
    sha3.ShakeSum256(keys, data)

    secretKey := keys[:S_KEY_LEN]
    secretIv := keys[S_KEY_LEN : S_KEY_LEN+S_IV_LEN]
    macKey := keys[S_KEY_LEN+S_IV_LEN:]

    block, _ := aes.NewCipher(secretKey)
    ciphr := cipher.NewCTR(block, secretIv)
    // MAC(m) = SHA3-256(INT_8(len(mac_key)) | mac_key | INT_8(len(salt)) | salt | m)
    data2 := make([]byte, 8)
    binary.BigEndian.PutUint64(data2, uint64(len(macKey)))
    data3 := make([]byte, 8)
    binary.BigEndian.PutUint64(data3, uint64(len(salt)))
    macPrefix := string(data2) + string(macKey) + string(data3) + string(salt)
    fn := func(ciphertext []byte) []byte {
        tmp := sha3.Sum256([]byte(macPrefix + string(ciphertext)))
        return tmp[:]
    }
    return ciphr, fn
}

const S_KEY_LEN = 32
const S_IV_LEN = 16
const SALT_LEN = 16
const MAC_LEN = 32

// IdentityKeyFromAddress converts a hidden service address into its public identity key.
func IdentityKeyFromAddress(onionAddress string) ed25519.PublicKey {
    onionAddress = strings.TrimSuffix(onionAddress, ".onion")
    decodedAddress, _ := base32.StdEncoding.DecodeString(strings.ToUpper(onionAddress))
    pubKey := decodedAddress[:32]
    expectedChecksum := decodedAddress[32:34]
    version := decodedAddress[34:35]
    checksumTmp := sha3.Sum256([]byte(".onion checksum" + string(pubKey) + string(version)))
    checksum := checksumTmp[:2]
    if !bytes.Equal(expectedChecksum, checksum) {
        logrus.Panicf("Bad checksum (expected %x but was %x)", expectedChecksum, checksum)
    }
    return pubKey
}

// AddressFromIdentityKey is the inverse operation:
// onion_address = base32(pubkey | checksum | version) + ".onion".
func AddressFromIdentityKey(pub ed25519.PublicKey) string {
    var checksumBytes bytes.Buffer
    checksumBytes.Write([]byte(".onion checksum"))
    checksumBytes.Write(pub)
    checksumBytes.Write([]byte{0x03})
    checksum := sha3.Sum256(checksumBytes.Bytes())
    var onionAddressBytes bytes.Buffer
    onionAddressBytes.Write(pub)
    onionAddressBytes.Write(checksum[:2])
    onionAddressBytes.Write([]byte{0x03})
    addr := strings.ToLower(base32.StdEncoding.EncodeToString(onionAddressBytes.Bytes()))
    return addr + ".onion"
}
||||
|
||||
func subcredential(identityKey, blindedKey ed25519.PublicKey) []byte {
|
||||
// credential = H('credential' | public - identity - key)
|
||||
// subcredential = H('subcredential' | credential | blinded - public - key)
|
||||
credential := sha3.Sum256([]byte("credential" + string(identityKey)))
|
||||
sub := sha3.Sum256([]byte("subcredential" + string(credential[:]) + string(blindedKey)))
|
||||
return sub[:]
|
||||
}
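Note: the two address helpers above implement the v3 onion encoding from Tor's rend-spec-v3 (pubkey || checksum[:2] || version 0x03, base32-encoded). A minimal round-trip sketch, assuming it sits in the same package with bytes, fmt, crypto/rand and crypto/ed25519 imported; the key is a throwaway, not a real service:

func ExampleAddressRoundTrip() {
    pub, _, _ := ed25519.GenerateKey(rand.Reader) // throwaway identity key
    addr := AddressFromIdentityKey(pub)           // 56-char base32 + ".onion"
    back := IdentityKeyFromAddress(addr)          // panics on a bad checksum
    fmt.Println(bytes.Equal(back, pub))           // true: the encoding round-trips
}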
File diff suppressed because one or more lines are too long
343
endgamefiles/sourcecode/gobalance/pkg/stem/util/ed25519.go
Normal file
@ -0,0 +1,343 @@
package util

import (
    "bytes"
    "crypto/ed25519"
    "crypto/sha512"
    "fmt"
    "math/big"
)

var b = 256
var d = bi(0).Mul(bi(-121665), inv(bi(121666)))
var d1 = biMod(biMul(bi(-121665), inv(bi(121666))), q)
var I = expmod(bi(2), biDiv(biSub(q, bi(1)), bi(4)), q)
var q = biSub(biExp(bi(2), bi(255)), bi(19))
var by = biMul(bi(4), inv(bi(5)))
var bx = xrecover(by)
var bB = []*big.Int{biMod(bx, q), biMod(by, q)}
var bB1 = []*big.Int{biMod(bx, q), biMod(by, q), bi(1), biMod(biMul(bx, by), q)}
var l = biAdd(biExp(bi(2), bi(252)), biFromStr("27742317777372353535851937790883648493"))

func biFromStr(v string) (out *big.Int) {
    out = new(big.Int)
    _, _ = fmt.Sscan(v, out)
    return
}

// BlindedSignWithTorKey is identical to stem's hidden_service.py:_blinded_sign() but takes an
// extended private key (i.e. in tor format) as its argument, instead of the
// standard format that hazmat does. It basically omits the "extend the key"
// step and does everything else the same.
func BlindedSignWithTorKey(msg []byte, identityKey ed25519.PrivateKey, blindedKey, blindingNonce []byte) []byte {
    esk := identityKey.Seed()
    return blindedSignP2(esk, msg, blindedKey, blindingNonce)
}

func BlindedSign(msg, identityKey, blindedKey, blindingNonce []byte) []byte {
    identityKeyBytes := identityKey

    // pad private identity key into an ESK (encrypted secret key)

    tmp := sha512.Sum512(identityKeyBytes)
    h := tmp[:]
    sum := bi(0)
    for i := int64(3); i < int64(b)-2; i++ {
        sum = biAdd(sum, biMul(biExp(bi(2), bi(i)), bi(int64(Bit(h, i)))))
    }
    a := biAdd(biExp(bi(2), bi(int64(b-2))), sum)
    tmpS := make([][]byte, 0)
    for i := b / 8; i < b/4; i++ {
        tmpS = append(tmpS, h[i:i+1])
    }
    k := bytes.Join(tmpS, []byte(""))
    esk := append(encodeint(a), k...)

    return blindedSignP2(esk, msg, blindedKey, blindingNonce)
}

func blindedSignP2(esk, msg, blindedKey, blindingNonce []byte) []byte {
    // blind the ESK with this nonce
    sum := bi(0)
    for i := int64(3); i < int64(b-2); i++ {
        bitRes := bi(int64(Bit(blindingNonce, i)))
        sum = biAdd(sum, biMul(biExp(bi(2), bi(i)), bitRes))
    }
    mult := biAdd(biExp(bi(2), bi(int64(b-2))), sum)
    s := decodeInt(esk[:32])
    sPrime := biMod(biMul(s, mult), l)
    k := esk[32:]
    tmp := sha512.Sum512([]byte("Derive temporary signing key hash input" + string(k)))
    kPrime := tmp[:32]
    blindedEsk := append(encodeint(sPrime), kPrime...)

    // finally, sign the message

    a := decodeInt(blindedEsk[:32])
    lines := make([][]byte, 0)
    for i := b / 8; i < b/4; i++ {
        lines = append(lines, blindedEsk[i:i+1])
    }
    toHint := append(bytes.Join(lines, []byte("")), msg...)
    r := hint(toHint)
    R := Scalarmult1(bB1, r)
    S := biMod(biAdd(r, biMul(hint([]byte(string(Encodepoint(R))+string(blindedKey)+string(msg))), a)), l)

    return append(Encodepoint(R), encodeint(S)...)
}

func hint(m []byte) *big.Int {
    tmp := sha512.Sum512(m)
    h := tmp[:]
    sum := bi(0)
    for i := 0; i < 2*b; i++ {
        sum = biAdd(sum, biMul(biExp(bi(2), bi(int64(i))), bi(int64(Bit(h, int64(i))))))
    }
    return sum
}

//def Hint(m):
//h = H(m)
//return sum(2 ** i * bit(h, i) for i in range(2 * b))

func BlindedPubkey(identityKey ed25519.PublicKey, blindingNonce []byte) ed25519.PublicKey {
    ed25519b := int64(256)
    sum := bi(0)
    for i := int64(3); i < ed25519b-2; i++ {
        sum = biAdd(sum, biMul(biExp(bi(2), bi(i)), bi(int64(Bit(blindingNonce, i)))))
    }
    mult := biAdd(biExp(bi(2), bi(ed25519b-2)), sum)
    P := Decodepoint(identityKey)
    return Encodepoint(Scalarmult1(P, mult))
}

func Decodepoint(s []byte) []*big.Int {
    sum := bi(0)
    for i := 0; i < b-1; i++ {
        sum = biAdd(sum, biMul(biExp(bi(2), bi(int64(i))), bi(int64(Bit(s, int64(i))))))
    }
    y := sum
    x := xrecover(y)
    if biAnd(x, bi(1)).Cmp(bi(int64(Bit(s, int64(b-1))))) != 0 {
        x = biSub(q, x)
    }
    P := []*big.Int{x, y, bi(1), biMod(biMul(x, y), q)}
    if !isoncurve(P) {
        panic("decoding point that is not on curve")
    }
    return P
}

func decodeInt(s []uint8) *big.Int {
    sum := bi(0)
    for i := 0; i < 256; i++ {
        tmpI := bi(int64(i))
        base := bi(2)
        e := bi(0).Exp(base, tmpI, nil)
        m := bi(int64(Bit(s, int64(i))))
        tmp := bi(0).Mul(e, m)
        sum = sum.Add(sum, tmp)
    }
    return sum
}

func encodeint(y *big.Int) []byte {
    bits := make([]*big.Int, 0)
    for i := 0; i < b; i++ {
        bits = append(bits, biAnd(biRsh(y, uint(i)), bi(1)))
    }
    final := make([]byte, 0)
    for i := 0; i < b/8; i++ {
        sum := bi(0)
        for j := 0; j < 8; j++ {
            sum = biAdd(sum, biLsh(bits[i*8+j], uint(j)))
        }
        final = append(final, byte(sum.Uint64()))
    }
    return final
}

func xrecover(y *big.Int) *big.Int {
    xx := biMul(biSub(biMul(y, y), bi(1)), inv(biAdd(biMul(biMul(d, y), y), bi(1))))
    x := expmod(xx, biDiv(biAdd(q, bi(3)), bi(8)), q)
    if biMod(biSub(biMul(x, x), xx), q).Int64() != 0 {
        x = biMod(biMul(x, I), q)
    }
    if biMod(x, bi(2)).Int64() != 0 {
        x = biSub(q, x)
    }
    return x
}

func expmod(b, e, m *big.Int) *big.Int {
    if e.Cmp(bi(0)) == 0 {
        return bi(1)
    }
    t := biMod(biExp(expmod(b, biDiv(e, bi(2)), m), bi(2)), m)
    if biAnd(e, bi(1)).Int64() == 1 {
        t = biMod(biMul(t, b), m)
    }
    return t
}

func Bit(h []uint8, i int64) uint8 {
    return (h[i/8] >> (i % 8)) & 1
}

func inv(x *big.Int) *big.Int {
    return expmod(x, biSub(q, bi(2)), q)
}

func isoncurve(P []*big.Int) bool {
    var d = biMod(biMul(bi(-121665), inv(bi(121666))), q)
    var q = biSub(biExp(bi(2), bi(255)), bi(19))
    x := P[0]
    y := P[1]
    z := P[2]
    t := P[3]
    return biMod(z, q).Cmp(bi(0)) != 0 &&
        biMod(biMul(x, y), q).Cmp(biMod(biMul(z, t), q)) == 0 &&
        biMod(biSub(biSub(biSub(biMul(y, y), biMul(x, x)), biMul(z, z)), biMul(biMul(d, t), t)), q).Int64() == 0
}

func edwardsAdd(P, Q []*big.Int) []*big.Int {
    // This is formula sequence 'addition-add-2008-hwcd-3' from
    // http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1.html
    x1 := P[0]
    y1 := P[1]
    z1 := P[2]
    t1 := P[3]
    x2 := Q[0]
    y2 := Q[1]
    z2 := Q[2]
    t2 := Q[3]
    a := biMod(biMul(biSub(y1, x1), biSub(y2, x2)), q)
    b := biMod(biMul(biAdd(y1, x1), biAdd(y2, x2)), q)
    c := biMod(biMul(biMul(biMul(t1, bi(2)), d1), t2), q)
    dd := biMod(biMul(biMul(z1, bi(2)), z2), q)
    e := biSub(b, a)
    f := biSub(dd, c)
    g := biAdd(dd, c)
    h := biAdd(b, a)
    x3 := biMul(e, f)
    y3 := biMul(g, h)
    t3 := biMul(e, h)
    z3 := biMul(f, g)
    return []*big.Int{biMod(x3, q), biMod(y3, q), biMod(z3, q), biMod(t3, q)}
}

func edwardsDouble(P []*big.Int) []*big.Int {
    // This is formula sequence 'dbl-2008-hwcd' from
    // http://www.hyperelliptic.org/EFD/g1p/auto-twisted-extended-1.html
    x1 := P[0]
    y1 := P[1]
    z1 := P[2]
    a := biMod(biMul(x1, x1), q)
    b := biMod(biMul(y1, y1), q)
    c := biMod(biMul(biMul(bi(2), z1), z1), q)
    e := biMod(biSub(biSub(biMul(biAdd(x1, y1), biAdd(x1, y1)), a), b), q)
    g := biAdd(biMul(a, bi(-1)), b)
    f := biSub(g, c)
    h := biSub(biMul(a, bi(-1)), b)
    x3 := biMul(e, f)
    y3 := biMul(g, h)
    t3 := biMul(e, h)
    z3 := biMul(f, g)
    return []*big.Int{biMod(x3, q), biMod(y3, q), biMod(z3, q), biMod(t3, q)}
}

func Scalarmult1(P []*big.Int, e *big.Int) []*big.Int {
    if e.Cmp(bi(0)) == 0 {
        return []*big.Int{bi(0), bi(1), bi(1), bi(0)}
    }
    Q := Scalarmult1(P, biDiv(e, bi(2)))
    Q = edwardsDouble(Q)
    if biAnd(e, bi(1)).Int64() == 1 {
        //if e.And(e, bi(1)).Int64() == 1 {
        Q = edwardsAdd(Q, P)
    }
    return Q
}

func Encodepoint(P []*big.Int) []byte {
    x := P[0]
    y := P[1]
    z := P[2]
    //t := P[3]
    zi := inv(z)
    x = biMod(biMul(x, zi), q)
    y = biMod(biMul(y, zi), q)
    bits := make([]uint8, 0)
    for i := 0; i < b-1; i++ {
        bits = append(bits, uint8(biAnd(biRsh(y, uint(i)), bi(1)).Int64()))
    }
    bits = append(bits, uint8(biAnd(x, bi(1)).Int64()))
    by := make([]uint8, 0)
    for i := 0; i < b/8; i++ {
        sum := uint8(0)
        for j := 0; j < 8; j++ {
            sum += bits[i*8+j] << j
        }
        by = append(by, sum)
    }
    return by
}

//func Encodepoint(P []*big.Int) []byte {
//    x := P[0]
//    y := P[1]
//    bits := make([]uint8, 0)
//    for i := 0; i < b; i++ {
//        bits = append(bits, uint8(biAnd(biRsh(y, uint(i)), bi(1)).Int64()))
//    }
//    by := make([]uint8, 0)
//    bits = append(bits, uint8(biAnd(x, bi(1)).Int64()))
//    for i := 0; i < b/8; i++ {
//        sum := uint8(0)
//        for j := 0; j < 8; j++ {
//            sum += bits[i*8+j] << j
//        }
//        by = append(by, sum)
//    }
//    return by
//}

func bi(v int64) *big.Int {
    return big.NewInt(v)
}

func biExp(a, b *big.Int) *big.Int {
    return bi(0).Exp(a, b, nil)
}

func biDiv(a, b *big.Int) *big.Int {
    return bi(0).Div(a, b)
}

func biSub(a, b *big.Int) *big.Int {
    return bi(0).Sub(a, b)
}

func biAdd(a, b *big.Int) *big.Int {
    return bi(0).Add(a, b)
}

func biAnd(a, b *big.Int) *big.Int {
    return bi(0).And(a, b)
}

func biRsh(a *big.Int, b uint) *big.Int {
    return bi(0).Rsh(a, b)
}

func biLsh(a *big.Int, b uint) *big.Int {
    return bi(0).Lsh(a, b)
}

func biMul(a, b *big.Int) *big.Int {
    return bi(0).Mul(a, b)
}

func biMod(a, b *big.Int) *big.Int {
    return bi(0).Mod(a, b)
}
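Note: BlindedPubkey, Decodepoint and Encodepoint above are the building blocks gobalance uses to compute the per-time-period blinded key for a descriptor. A hedged usage sketch in the same package (the all-zero nonce is a placeholder; Tor derives the real blinding nonce from the identity key and the current time period, which is not shown here):

func ExampleBlindedPubkey() {
    pub, _, _ := ed25519.GenerateKey(rand.Reader) // requires crypto/rand
    nonce := make([]byte, 32)                     // placeholder blinding nonce
    blinded := BlindedPubkey(pub, nonce)
    fmt.Printf("blinded key: %x\n", blinded) // 32-byte Edwards point encoding
}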
@ -0,0 +1,25 @@
package util

import (
    "crypto/ed25519"
    "crypto/x509"
    "encoding/base64"
    "encoding/pem"
    "github.com/stretchr/testify/assert"
    "testing"
)

func TestBlindedSign(t *testing.T) {
    msg, _ := base64.StdEncoding.DecodeString(`AQgABvn+AUmtuF1+Nb/kJ67y1U0lI7HiDjRJwHHY+sQrHlBKomR3AQAgBAAtL5DBE1Moh7A+AGrzgWhcHOBo/W3lyhcLeip0LuI8Xw==`)
    identityKeyPem := `-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIMjdAAyeb8pU3CzRK2z+yKSgWi0R33mfeAPpVnktRrwA
-----END PRIVATE KEY-----`
    block, _ := pem.Decode([]byte(identityKeyPem))
    key, _ := x509.ParsePKCS8PrivateKey(block.Bytes)
    identityKey := key.(ed25519.PrivateKey)
    blindedKey, _ := base64.StdEncoding.DecodeString(`LS+QwRNTKIewPgBq84FoXBzgaP1t5coXC3oqdC7iPF8=`)
    blindingNonce, _ := base64.StdEncoding.DecodeString(`ljbKEFzZGbd3ZI29J67XTs6JV3Glp+uieQ5yORMhmdg=`)
    expected := `xIrhGFs3VZKbV36zqCcudaWN0+K8s6zRRr5qki1uz/HjBL80SQ0HEirDp4DnNBAeYDIjNJwmrgQe6IU8ESHzDg==`
    res := BlindedSign(msg, identityKey.Seed(), blindedKey, blindingNonce)
    assert.Equal(t, expected, base64.StdEncoding.EncodeToString(res))
}
0
endgamefiles/sourcecode/gobalance/pkg/test
Normal file
4
endgamefiles/sourcecode/gobalance/torrc
Normal file
@ -0,0 +1,4 @@
@ -0,0 +1,4 @@
|
||||
RunAsDaemon 0
|
||||
ControlPort 9051
|
||||
DataDirectory torfiles
|
||||
CookieAuthentication 1
|
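Note: this torrc runs Tor in the foreground with a cookie-authenticated control port, which is what lets a controller like gobalance talk to Tor to publish descriptors. A minimal sketch of the cookie handshake under these exact settings; the cookie file name and AUTHENTICATE command come from the standard Tor control-spec, and error handling is trimmed for brevity:

package main

import (
    "bufio"
    "encoding/hex"
    "fmt"
    "net"
    "os"
)

func main() {
    // With CookieAuthentication 1, Tor writes the auth cookie into the DataDirectory.
    cookie, _ := os.ReadFile("torfiles/control_auth_cookie")
    conn, _ := net.Dial("tcp", "127.0.0.1:9051")
    defer conn.Close()
    fmt.Fprintf(conn, "AUTHENTICATE %s\r\n", hex.EncodeToString(cookie))
    reply, _ := bufio.NewReader(conn).ReadString('\n')
    fmt.Print(reply) // "250 OK" on success
}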
21
endgamefiles/sourcecode/gobalance/vendor/github.com/cpuguy83/go-md2man/v2/LICENSE.md
generated
vendored
Normal file
@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2014 Brian Goff

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
14
endgamefiles/sourcecode/gobalance/vendor/github.com/cpuguy83/go-md2man/v2/md2man/md2man.go
generated
vendored
Normal file
@ -0,0 +1,14 @@
package md2man

import (
    "github.com/russross/blackfriday/v2"
)

// Render converts a markdown document into a roff formatted document.
func Render(doc []byte) []byte {
    renderer := NewRoffRenderer()

    return blackfriday.Run(doc,
        []blackfriday.Option{blackfriday.WithRenderer(renderer),
            blackfriday.WithExtensions(renderer.GetExtensions())}...)
}
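Note: Render is the whole public surface of this vendored package. A short sketch of how a caller turns markdown into a man page (the markdown content is illustrative only):

package main

import (
    "os"

    "github.com/cpuguy83/go-md2man/v2/md2man"
)

func main() {
    // The first level-1 heading becomes the .TH title header (see roff.go below).
    src := []byte("# gobalance 1\n\n## NAME\ngobalance - onion service load balancer\n")
    os.Stdout.Write(md2man.Render(src)) // emits roff (.TH/.SH ...) for man(1)
}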
336
endgamefiles/sourcecode/gobalance/vendor/github.com/cpuguy83/go-md2man/v2/md2man/roff.go
generated
vendored
Normal file
@ -0,0 +1,336 @@
package md2man

import (
    "fmt"
    "io"
    "os"
    "strings"

    "github.com/russross/blackfriday/v2"
)

// roffRenderer implements the blackfriday.Renderer interface for creating
// roff format (manpages) from markdown text
type roffRenderer struct {
    extensions   blackfriday.Extensions
    listCounters []int
    firstHeader  bool
    firstDD      bool
    listDepth    int
}

const (
    titleHeader      = ".TH "
    topLevelHeader   = "\n\n.SH "
    secondLevelHdr   = "\n.SH "
    otherHeader      = "\n.SS "
    crTag            = "\n"
    emphTag          = "\\fI"
    emphCloseTag     = "\\fP"
    strongTag        = "\\fB"
    strongCloseTag   = "\\fP"
    breakTag         = "\n.br\n"
    paraTag          = "\n.PP\n"
    hruleTag         = "\n.ti 0\n\\l'\\n(.lu'\n"
    linkTag          = "\n\\[la]"
    linkCloseTag     = "\\[ra]"
    codespanTag      = "\\fB\\fC"
    codespanCloseTag = "\\fR"
    codeTag          = "\n.PP\n.RS\n\n.nf\n"
    codeCloseTag     = "\n.fi\n.RE\n"
    quoteTag         = "\n.PP\n.RS\n"
    quoteCloseTag    = "\n.RE\n"
    listTag          = "\n.RS\n"
    listCloseTag     = "\n.RE\n"
    dtTag            = "\n.TP\n"
    dd2Tag           = "\n"
    tableStart       = "\n.TS\nallbox;\n"
    tableEnd         = ".TE\n"
    tableCellStart   = "T{\n"
    tableCellEnd     = "\nT}\n"
)

// NewRoffRenderer creates a new blackfriday Renderer for generating roff documents
// from markdown
func NewRoffRenderer() *roffRenderer { // nolint: golint
    var extensions blackfriday.Extensions

    extensions |= blackfriday.NoIntraEmphasis
    extensions |= blackfriday.Tables
    extensions |= blackfriday.FencedCode
    extensions |= blackfriday.SpaceHeadings
    extensions |= blackfriday.Footnotes
    extensions |= blackfriday.Titleblock
    extensions |= blackfriday.DefinitionLists
    return &roffRenderer{
        extensions: extensions,
    }
}

// GetExtensions returns the list of extensions used by this renderer implementation
func (r *roffRenderer) GetExtensions() blackfriday.Extensions {
    return r.extensions
}

// RenderHeader handles outputting the header at document start
func (r *roffRenderer) RenderHeader(w io.Writer, ast *blackfriday.Node) {
    // disable hyphenation
    out(w, ".nh\n")
}

// RenderFooter handles outputting the footer at the document end; the roff
// renderer has no footer information
func (r *roffRenderer) RenderFooter(w io.Writer, ast *blackfriday.Node) {
}

// RenderNode is called for each node in a markdown document; based on the node
// type the equivalent roff output is sent to the writer
func (r *roffRenderer) RenderNode(w io.Writer, node *blackfriday.Node, entering bool) blackfriday.WalkStatus {

    var walkAction = blackfriday.GoToNext

    switch node.Type {
    case blackfriday.Text:
        escapeSpecialChars(w, node.Literal)
    case blackfriday.Softbreak:
        out(w, crTag)
    case blackfriday.Hardbreak:
        out(w, breakTag)
    case blackfriday.Emph:
        if entering {
            out(w, emphTag)
        } else {
            out(w, emphCloseTag)
        }
    case blackfriday.Strong:
        if entering {
            out(w, strongTag)
        } else {
            out(w, strongCloseTag)
        }
    case blackfriday.Link:
        if !entering {
            out(w, linkTag+string(node.LinkData.Destination)+linkCloseTag)
        }
    case blackfriday.Image:
        // ignore images
        walkAction = blackfriday.SkipChildren
    case blackfriday.Code:
        out(w, codespanTag)
        escapeSpecialChars(w, node.Literal)
        out(w, codespanCloseTag)
    case blackfriday.Document:
        break
    case blackfriday.Paragraph:
        // roff .PP markers break lists
        if r.listDepth > 0 {
            return blackfriday.GoToNext
        }
        if entering {
            out(w, paraTag)
        } else {
            out(w, crTag)
        }
    case blackfriday.BlockQuote:
        if entering {
            out(w, quoteTag)
        } else {
            out(w, quoteCloseTag)
        }
    case blackfriday.Heading:
        r.handleHeading(w, node, entering)
    case blackfriday.HorizontalRule:
        out(w, hruleTag)
    case blackfriday.List:
        r.handleList(w, node, entering)
    case blackfriday.Item:
        r.handleItem(w, node, entering)
    case blackfriday.CodeBlock:
        out(w, codeTag)
        escapeSpecialChars(w, node.Literal)
        out(w, codeCloseTag)
    case blackfriday.Table:
        r.handleTable(w, node, entering)
    case blackfriday.TableHead:
    case blackfriday.TableBody:
    case blackfriday.TableRow:
        // no action as cell entries do all the nroff formatting
        return blackfriday.GoToNext
    case blackfriday.TableCell:
        r.handleTableCell(w, node, entering)
    case blackfriday.HTMLSpan:
        // ignore other HTML tags
    default:
        fmt.Fprintln(os.Stderr, "WARNING: go-md2man does not handle node type "+node.Type.String())
    }
    return walkAction
}

func (r *roffRenderer) handleHeading(w io.Writer, node *blackfriday.Node, entering bool) {
    if entering {
        switch node.Level {
        case 1:
            if !r.firstHeader {
                out(w, titleHeader)
                r.firstHeader = true
                break
            }
            out(w, topLevelHeader)
        case 2:
            out(w, secondLevelHdr)
        default:
            out(w, otherHeader)
        }
    }
}

func (r *roffRenderer) handleList(w io.Writer, node *blackfriday.Node, entering bool) {
    openTag := listTag
    closeTag := listCloseTag
    if node.ListFlags&blackfriday.ListTypeDefinition != 0 {
        // tags for definition lists handled within Item node
        openTag = ""
        closeTag = ""
    }
    if entering {
        r.listDepth++
        if node.ListFlags&blackfriday.ListTypeOrdered != 0 {
            r.listCounters = append(r.listCounters, 1)
        }
        out(w, openTag)
    } else {
        if node.ListFlags&blackfriday.ListTypeOrdered != 0 {
            r.listCounters = r.listCounters[:len(r.listCounters)-1]
        }
        out(w, closeTag)
        r.listDepth--
    }
}

func (r *roffRenderer) handleItem(w io.Writer, node *blackfriday.Node, entering bool) {
    if entering {
        if node.ListFlags&blackfriday.ListTypeOrdered != 0 {
            out(w, fmt.Sprintf(".IP \"%3d.\" 5\n", r.listCounters[len(r.listCounters)-1]))
            r.listCounters[len(r.listCounters)-1]++
        } else if node.ListFlags&blackfriday.ListTypeTerm != 0 {
            // DT (definition term): line just before DD (see below).
            out(w, dtTag)
            r.firstDD = true
        } else if node.ListFlags&blackfriday.ListTypeDefinition != 0 {
            // DD (definition description): line that starts with ": ".
            //
            // We have to distinguish between the first DD and the
            // subsequent ones, as there should be no vertical
            // whitespace between the DT and the first DD.
            if r.firstDD {
                r.firstDD = false
            } else {
                out(w, dd2Tag)
            }
        } else {
            out(w, ".IP \\(bu 2\n")
        }
    } else {
        out(w, "\n")
    }
}

func (r *roffRenderer) handleTable(w io.Writer, node *blackfriday.Node, entering bool) {
    if entering {
        out(w, tableStart)
        // call walker to count cells (and rows?) so format section can be produced
        columns := countColumns(node)
        out(w, strings.Repeat("l ", columns)+"\n")
        out(w, strings.Repeat("l ", columns)+".\n")
    } else {
        out(w, tableEnd)
    }
}

func (r *roffRenderer) handleTableCell(w io.Writer, node *blackfriday.Node, entering bool) {
    if entering {
        var start string
        if node.Prev != nil && node.Prev.Type == blackfriday.TableCell {
            start = "\t"
        }
        if node.IsHeader {
            start += codespanTag
        } else if nodeLiteralSize(node) > 30 {
            start += tableCellStart
        }
        out(w, start)
    } else {
        var end string
        if node.IsHeader {
            end = codespanCloseTag
        } else if nodeLiteralSize(node) > 30 {
            end = tableCellEnd
        }
        if node.Next == nil && end != tableCellEnd {
            // Last cell: need to carriage return if we are at the end of the
            // header row and content isn't wrapped in a "tablecell"
            end += crTag
        }
        out(w, end)
    }
}

func nodeLiteralSize(node *blackfriday.Node) int {
    total := 0
    for n := node.FirstChild; n != nil; n = n.FirstChild {
        total += len(n.Literal)
    }
    return total
}

// because roff format requires knowing the column count before outputting any table
// data we need to walk a table tree and count the columns
func countColumns(node *blackfriday.Node) int {
    var columns int

    node.Walk(func(node *blackfriday.Node, entering bool) blackfriday.WalkStatus {
        switch node.Type {
        case blackfriday.TableRow:
            if !entering {
                return blackfriday.Terminate
            }
        case blackfriday.TableCell:
            if entering {
                columns++
            }
        default:
        }
        return blackfriday.GoToNext
    })
    return columns
}

func out(w io.Writer, output string) {
    io.WriteString(w, output) // nolint: errcheck
}

func escapeSpecialChars(w io.Writer, text []byte) {
    for i := 0; i < len(text); i++ {
        // escape initial apostrophe or period
        if len(text) >= 1 && (text[0] == '\'' || text[0] == '.') {
            out(w, "\\&")
        }

        // directly copy normal characters
        org := i

        for i < len(text) && text[i] != '\\' {
            i++
        }
        if i > org {
            w.Write(text[org:i]) // nolint: errcheck
        }

        // escape a character
        if i >= len(text) {
            break
        }

        w.Write([]byte{'\\', text[i]}) // nolint: errcheck
    }
}
15
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/LICENSE
generated
vendored
Normal file
@ -0,0 +1,15 @@
ISC License

Copyright (c) 2012-2016 Dave Collins <dave@davec.name>

Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.

THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
145
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/bypass.go
generated
vendored
Normal file
@ -0,0 +1,145 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is not running on Google App Engine, compiled by GopherJS, and
// "-tags safe" is not added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// Go versions prior to 1.4 are disabled because they use a different layout
// for interfaces which make the implementation of unsafeReflectValue more complex.
// +build !js,!appengine,!safe,!disableunsafe,go1.4

package spew

import (
    "reflect"
    "unsafe"
)

const (
    // UnsafeDisabled is a build-time constant which specifies whether or
    // not access to the unsafe package is available.
    UnsafeDisabled = false

    // ptrSize is the size of a pointer on the current arch.
    ptrSize = unsafe.Sizeof((*byte)(nil))
)

type flag uintptr

var (
    // flagRO indicates whether the value field of a reflect.Value
    // is read-only.
    flagRO flag

    // flagAddr indicates whether the address of the reflect.Value's
    // value may be taken.
    flagAddr flag
)

// flagKindMask holds the bits that make up the kind
// part of the flags field. In all the supported versions,
// it is in the lower 5 bits.
const flagKindMask = flag(0x1f)

// Different versions of Go have used different
// bit layouts for the flags type. This table
// records the known combinations.
var okFlags = []struct {
    ro, addr flag
}{{
    // From Go 1.4 to 1.5
    ro:   1 << 5,
    addr: 1 << 7,
}, {
    // Up to Go tip.
    ro:   1<<5 | 1<<6,
    addr: 1 << 8,
}}

var flagValOffset = func() uintptr {
    field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
    if !ok {
        panic("reflect.Value has no flag field")
    }
    return field.Offset
}()

// flagField returns a pointer to the flag field of a reflect.Value.
func flagField(v *reflect.Value) *flag {
    return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
}

// unsafeReflectValue converts the passed reflect.Value into a one that bypasses
// the typical safety restrictions preventing access to unaddressable and
// unexported data. It works by digging the raw pointer to the underlying
// value out of the protected value and generating a new unprotected (unsafe)
// reflect.Value to it.
//
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
func unsafeReflectValue(v reflect.Value) reflect.Value {
    if !v.IsValid() || (v.CanInterface() && v.CanAddr()) {
        return v
    }
    flagFieldPtr := flagField(&v)
    *flagFieldPtr &^= flagRO
    *flagFieldPtr |= flagAddr
    return v
}

// Sanity checks against future reflect package changes
// to the type or semantics of the Value.flag field.
func init() {
    field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
    if !ok {
        panic("reflect.Value has no flag field")
    }
    if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {
        panic("reflect.Value flag field has changed kind")
    }
    type t0 int
    var t struct {
        A t0
        // t0 will have flagEmbedRO set.
        t0
        // a will have flagStickyRO set
        a t0
    }
    vA := reflect.ValueOf(t).FieldByName("A")
    va := reflect.ValueOf(t).FieldByName("a")
    vt0 := reflect.ValueOf(t).FieldByName("t0")

    // Infer flagRO from the difference between the flags
    // for the (otherwise identical) fields in t.
    flagPublic := *flagField(&vA)
    flagWithRO := *flagField(&va) | *flagField(&vt0)
    flagRO = flagPublic ^ flagWithRO

    // Infer flagAddr from the difference between a value
    // taken from a pointer and not.
    vPtrA := reflect.ValueOf(&t).Elem().FieldByName("A")
    flagNoPtr := *flagField(&vA)
    flagPtr := *flagField(&vPtrA)
    flagAddr = flagNoPtr ^ flagPtr

    // Check that the inferred flags tally with one of the known versions.
    for _, f := range okFlags {
        if flagRO == f.ro && flagAddr == f.addr {
            return
        }
    }
    panic("reflect.Value read-only flag has changed semantics")
}
38
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/bypasssafe.go
generated
vendored
Normal file
@ -0,0 +1,38 @@
// Copyright (c) 2015-2016 Dave Collins <dave@davec.name>
//
// Permission to use, copy, modify, and distribute this software for any
// purpose with or without fee is hereby granted, provided that the above
// copyright notice and this permission notice appear in all copies.
//
// THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
// WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
// ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
// WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
// ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
// OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

// NOTE: Due to the following build constraints, this file will only be compiled
// when the code is running on Google App Engine, compiled by GopherJS, or
// "-tags safe" is added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build js appengine safe disableunsafe !go1.4

package spew

import "reflect"

const (
    // UnsafeDisabled is a build-time constant which specifies whether or
    // not access to the unsafe package is available.
    UnsafeDisabled = true
)

// unsafeReflectValue typically converts the passed reflect.Value into a one
// that bypasses the typical safety restrictions preventing access to
// unaddressable and unexported data. However, doing this relies on access to
// the unsafe package. This is a stub version which simply returns the passed
// reflect.Value when the unsafe package is not available.
func unsafeReflectValue(v reflect.Value) reflect.Value {
    return v
}
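Note: bypass.go and bypasssafe.go are mutually exclusive via the build tags above, so exactly one of them defines UnsafeDisabled in any given build. A quick sketch to see which variant was compiled in (build normally, then again with `go build -tags safe`):

package main

import (
    "fmt"

    "github.com/davecgh/go-spew/spew"
)

func main() {
    // false with the default (unsafe) build, true when built with -tags safe
    fmt.Println("unsafe bypass disabled:", spew.UnsafeDisabled)
}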
341
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/common.go
generated
vendored
Normal file
@ -0,0 +1,341 @@
/*
 * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

package spew

import (
    "bytes"
    "fmt"
    "io"
    "reflect"
    "sort"
    "strconv"
)

// Some constants in the form of bytes to avoid string overhead. This mirrors
// the technique used in the fmt package.
var (
    panicBytes            = []byte("(PANIC=")
    plusBytes             = []byte("+")
    iBytes                = []byte("i")
    trueBytes             = []byte("true")
    falseBytes            = []byte("false")
    interfaceBytes        = []byte("(interface {})")
    commaNewlineBytes     = []byte(",\n")
    newlineBytes          = []byte("\n")
    openBraceBytes        = []byte("{")
    openBraceNewlineBytes = []byte("{\n")
    closeBraceBytes       = []byte("}")
    asteriskBytes         = []byte("*")
    colonBytes            = []byte(":")
    colonSpaceBytes       = []byte(": ")
    openParenBytes        = []byte("(")
    closeParenBytes       = []byte(")")
    spaceBytes            = []byte(" ")
    pointerChainBytes     = []byte("->")
    nilAngleBytes         = []byte("<nil>")
    maxNewlineBytes       = []byte("<max depth reached>\n")
    maxShortBytes         = []byte("<max>")
    circularBytes         = []byte("<already shown>")
    circularShortBytes    = []byte("<shown>")
    invalidAngleBytes     = []byte("<invalid>")
    openBracketBytes      = []byte("[")
    closeBracketBytes     = []byte("]")
    percentBytes          = []byte("%")
    precisionBytes        = []byte(".")
    openAngleBytes        = []byte("<")
    closeAngleBytes       = []byte(">")
    openMapBytes          = []byte("map[")
    closeMapBytes         = []byte("]")
    lenEqualsBytes        = []byte("len=")
    capEqualsBytes        = []byte("cap=")
)

// hexDigits is used to map a decimal value to a hex digit.
var hexDigits = "0123456789abcdef"

// catchPanic handles any panics that might occur during the handleMethods
// calls.
func catchPanic(w io.Writer, v reflect.Value) {
    if err := recover(); err != nil {
        w.Write(panicBytes)
        fmt.Fprintf(w, "%v", err)
        w.Write(closeParenBytes)
    }
}

// handleMethods attempts to call the Error and String methods on the underlying
// type the passed reflect.Value represents and outputes the result to Writer w.
//
// It handles panics in any called methods by catching and displaying the error
// as the formatted value.
func handleMethods(cs *ConfigState, w io.Writer, v reflect.Value) (handled bool) {
    // We need an interface to check if the type implements the error or
    // Stringer interface. However, the reflect package won't give us an
    // interface on certain things like unexported struct fields in order
    // to enforce visibility rules. We use unsafe, when it's available,
    // to bypass these restrictions since this package does not mutate the
    // values.
    if !v.CanInterface() {
        if UnsafeDisabled {
            return false
        }

        v = unsafeReflectValue(v)
    }

    // Choose whether or not to do error and Stringer interface lookups against
    // the base type or a pointer to the base type depending on settings.
    // Technically calling one of these methods with a pointer receiver can
    // mutate the value, however, types which choose to satisify an error or
    // Stringer interface with a pointer receiver should not be mutating their
    // state inside these interface methods.
    if !cs.DisablePointerMethods && !UnsafeDisabled && !v.CanAddr() {
        v = unsafeReflectValue(v)
    }
    if v.CanAddr() {
        v = v.Addr()
    }

    // Is it an error or Stringer?
    switch iface := v.Interface().(type) {
    case error:
        defer catchPanic(w, v)
        if cs.ContinueOnMethod {
            w.Write(openParenBytes)
            w.Write([]byte(iface.Error()))
            w.Write(closeParenBytes)
            w.Write(spaceBytes)
            return false
        }

        w.Write([]byte(iface.Error()))
        return true

    case fmt.Stringer:
        defer catchPanic(w, v)
        if cs.ContinueOnMethod {
            w.Write(openParenBytes)
            w.Write([]byte(iface.String()))
            w.Write(closeParenBytes)
            w.Write(spaceBytes)
            return false
        }
        w.Write([]byte(iface.String()))
        return true
    }
    return false
}

// printBool outputs a boolean value as true or false to Writer w.
func printBool(w io.Writer, val bool) {
    if val {
        w.Write(trueBytes)
    } else {
        w.Write(falseBytes)
    }
}

// printInt outputs a signed integer value to Writer w.
func printInt(w io.Writer, val int64, base int) {
    w.Write([]byte(strconv.FormatInt(val, base)))
}

// printUint outputs an unsigned integer value to Writer w.
func printUint(w io.Writer, val uint64, base int) {
    w.Write([]byte(strconv.FormatUint(val, base)))
}

// printFloat outputs a floating point value using the specified precision,
// which is expected to be 32 or 64bit, to Writer w.
func printFloat(w io.Writer, val float64, precision int) {
    w.Write([]byte(strconv.FormatFloat(val, 'g', -1, precision)))
}

// printComplex outputs a complex value using the specified float precision
// for the real and imaginary parts to Writer w.
func printComplex(w io.Writer, c complex128, floatPrecision int) {
    r := real(c)
    w.Write(openParenBytes)
    w.Write([]byte(strconv.FormatFloat(r, 'g', -1, floatPrecision)))
    i := imag(c)
    if i >= 0 {
        w.Write(plusBytes)
    }
    w.Write([]byte(strconv.FormatFloat(i, 'g', -1, floatPrecision)))
    w.Write(iBytes)
    w.Write(closeParenBytes)
}

// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) {
    // Null pointer.
    num := uint64(p)
    if num == 0 {
        w.Write(nilAngleBytes)
        return
    }

    // Max uint64 is 16 bytes in hex + 2 bytes for '0x' prefix
    buf := make([]byte, 18)

    // It's simpler to construct the hex string right to left.
    base := uint64(16)
    i := len(buf) - 1
    for num >= base {
        buf[i] = hexDigits[num%base]
        num /= base
        i--
    }
    buf[i] = hexDigits[num]

    // Add '0x' prefix.
    i--
    buf[i] = 'x'
    i--
    buf[i] = '0'

    // Strip unused leading bytes.
    buf = buf[i:]
    w.Write(buf)
}

// valuesSorter implements sort.Interface to allow a slice of reflect.Value
// elements to be sorted.
type valuesSorter struct {
    values  []reflect.Value
    strings []string // either nil or same len and values
    cs      *ConfigState
}

// newValuesSorter initializes a valuesSorter instance, which holds a set of
// surrogate keys on which the data should be sorted. It uses flags in
// ConfigState to decide if and how to populate those surrogate keys.
func newValuesSorter(values []reflect.Value, cs *ConfigState) sort.Interface {
    vs := &valuesSorter{values: values, cs: cs}
    if canSortSimply(vs.values[0].Kind()) {
        return vs
    }
    if !cs.DisableMethods {
        vs.strings = make([]string, len(values))
        for i := range vs.values {
            b := bytes.Buffer{}
            if !handleMethods(cs, &b, vs.values[i]) {
                vs.strings = nil
                break
            }
            vs.strings[i] = b.String()
        }
    }
    if vs.strings == nil && cs.SpewKeys {
        vs.strings = make([]string, len(values))
        for i := range vs.values {
            vs.strings[i] = Sprintf("%#v", vs.values[i].Interface())
        }
    }
    return vs
}

// canSortSimply tests whether a reflect.Kind is a primitive that can be sorted
// directly, or whether it should be considered for sorting by surrogate keys
// (if the ConfigState allows it).
func canSortSimply(kind reflect.Kind) bool {
    // This switch parallels valueSortLess, except for the default case.
    switch kind {
    case reflect.Bool:
        return true
    case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
        return true
    case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
        return true
    case reflect.Float32, reflect.Float64:
        return true
    case reflect.String:
        return true
    case reflect.Uintptr:
        return true
    case reflect.Array:
        return true
    }
    return false
}

// Len returns the number of values in the slice. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Len() int {
    return len(s.values)
}

// Swap swaps the values at the passed indices. It is part of the
// sort.Interface implementation.
func (s *valuesSorter) Swap(i, j int) {
    s.values[i], s.values[j] = s.values[j], s.values[i]
    if s.strings != nil {
        s.strings[i], s.strings[j] = s.strings[j], s.strings[i]
    }
}

// valueSortLess returns whether the first value should sort before the second
// value. It is used by valueSorter.Less as part of the sort.Interface
// implementation.
func valueSortLess(a, b reflect.Value) bool {
    switch a.Kind() {
    case reflect.Bool:
        return !a.Bool() && b.Bool()
    case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
        return a.Int() < b.Int()
    case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
        return a.Uint() < b.Uint()
    case reflect.Float32, reflect.Float64:
        return a.Float() < b.Float()
    case reflect.String:
        return a.String() < b.String()
    case reflect.Uintptr:
        return a.Uint() < b.Uint()
    case reflect.Array:
        // Compare the contents of both arrays.
        l := a.Len()
        for i := 0; i < l; i++ {
            av := a.Index(i)
            bv := b.Index(i)
            if av.Interface() == bv.Interface() {
                continue
            }
            return valueSortLess(av, bv)
        }
    }
    return a.String() < b.String()
}

// Less returns whether the value at index i should sort before the
// value at index j. It is part of the sort.Interface implementation.
func (s *valuesSorter) Less(i, j int) bool {
    if s.strings == nil {
        return valueSortLess(s.values[i], s.values[j])
    }
    return s.strings[i] < s.strings[j]
}

// sortValues is a sort function that handles both native types and any type that
// can be converted to error or Stringer. Other inputs are sorted according to
// their Value.String() value to ensure display stability.
func sortValues(values []reflect.Value, cs *ConfigState) {
    if len(values) == 0 {
        return
    }
    sort.Sort(newValuesSorter(values, cs))
}
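Note: sortValues above is what makes spew's map output deterministic when SortKeys is set. A small sketch through the public ConfigState API from config.go below; Sprintf routes arguments through spew's Formatter, so with SortKeys the map keys should print in sorted order:

package main

import (
    "fmt"

    "github.com/davecgh/go-spew/spew"
)

func main() {
    cs := spew.ConfigState{Indent: " ", SortKeys: true}
    m := map[string]int{"b": 2, "c": 3, "a": 1}
    fmt.Println(cs.Sprintf("%v", m)) // keys in sorted order: map[a:1 b:2 c:3]
}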
306
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/config.go
generated
vendored
Normal file
@ -0,0 +1,306 @@
/*
 * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

package spew

import (
    "bytes"
    "fmt"
    "io"
    "os"
)

// ConfigState houses the configuration options used by spew to format and
// display values. There is a global instance, Config, that is used to control
// all top-level Formatter and Dump functionality. Each ConfigState instance
// provides methods equivalent to the top-level functions.
//
// The zero value for ConfigState provides no indentation. You would typically
// want to set it to a space or a tab.
//
// Alternatively, you can use NewDefaultConfig to get a ConfigState instance
// with default settings. See the documentation of NewDefaultConfig for default
// values.
type ConfigState struct {
    // Indent specifies the string to use for each indentation level. The
    // global config instance that all top-level functions use set this to a
    // single space by default. If you would like more indentation, you might
    // set this to a tab with "\t" or perhaps two spaces with "  ".
    Indent string

    // MaxDepth controls the maximum number of levels to descend into nested
    // data structures. The default, 0, means there is no limit.
    //
    // NOTE: Circular data structures are properly detected, so it is not
    // necessary to set this value unless you specifically want to limit deeply
    // nested data structures.
    MaxDepth int

    // DisableMethods specifies whether or not error and Stringer interfaces are
    // invoked for types that implement them.
    DisableMethods bool

    // DisablePointerMethods specifies whether or not to check for and invoke
    // error and Stringer interfaces on types which only accept a pointer
    // receiver when the current type is not a pointer.
    //
    // NOTE: This might be an unsafe action since calling one of these methods
    // with a pointer receiver could technically mutate the value, however,
    // in practice, types which choose to satisify an error or Stringer
    // interface with a pointer receiver should not be mutating their state
    // inside these interface methods. As a result, this option relies on
    // access to the unsafe package, so it will not have any effect when
    // running in environments without access to the unsafe package such as
    // Google App Engine or with the "safe" build tag specified.
    DisablePointerMethods bool

    // DisablePointerAddresses specifies whether to disable the printing of
    // pointer addresses. This is useful when diffing data structures in tests.
    DisablePointerAddresses bool

    // DisableCapacities specifies whether to disable the printing of capacities
    // for arrays, slices, maps and channels. This is useful when diffing
    // data structures in tests.
    DisableCapacities bool

    // ContinueOnMethod specifies whether or not recursion should continue once
    // a custom error or Stringer interface is invoked. The default, false,
    // means it will print the results of invoking the custom error or Stringer
    // interface and return immediately instead of continuing to recurse into
    // the internals of the data type.
    //
    // NOTE: This flag does not have any effect if method invocation is disabled
    // via the DisableMethods or DisablePointerMethods options.
    ContinueOnMethod bool

    // SortKeys specifies map keys should be sorted before being printed. Use
    // this to have a more deterministic, diffable output. Note that only
    // native types (bool, int, uint, floats, uintptr and string) and types
    // that support the error or Stringer interfaces (if methods are
    // enabled) are supported, with other types sorted according to the
    // reflect.Value.String() output which guarantees display stability.
    SortKeys bool

    // SpewKeys specifies that, as a last resort attempt, map keys should
    // be spewed to strings and sorted by those strings. This is only
    // considered if SortKeys is true.
    SpewKeys bool
}

// Config is the active configuration of the top-level functions.
// The configuration can be changed by modifying the contents of spew.Config.
var Config = ConfigState{Indent: " "}

// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the formatted string as a value that satisfies error. See NewFormatter
// for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Errorf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Errorf(format string, a ...interface{}) (err error) {
    return fmt.Errorf(format, c.convertArgs(a)...)
}

// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprint(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprint(w io.Writer, a ...interface{}) (n int, err error) {
    return fmt.Fprint(w, c.convertArgs(a)...)
}

// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintf(w, format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
    return fmt.Fprintf(w, format, c.convertArgs(a)...)
}

// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// passed with a Formatter interface returned by c.NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Fprintln(w, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
    return fmt.Fprintln(w, c.convertArgs(a)...)
}

// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Print(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Print(a ...interface{}) (n int, err error) {
    return fmt.Print(c.convertArgs(a)...)
}

// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Printf(format, c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Printf(format string, a ...interface{}) (n int, err error) {
    return fmt.Printf(format, c.convertArgs(a)...)
}

// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Println(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Println(a ...interface{}) (n int, err error) {
    return fmt.Println(c.convertArgs(a)...)
}

// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
// the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
// fmt.Sprint(c.NewFormatter(a), c.NewFormatter(b))
func (c *ConfigState) Sprint(a ...interface{}) string {
    return fmt.Sprint(c.convertArgs(a)...)
}

// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a Formatter interface returned by c.NewFormatter. It returns
|
||||
// the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprintf(format, c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Sprintf(format string, a ...interface{}) string {
|
||||
return fmt.Sprintf(format, c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
|
||||
// were passed with a Formatter interface returned by c.NewFormatter. It
|
||||
// returns the resulting string. See NewFormatter for formatting details.
|
||||
//
|
||||
// This function is shorthand for the following syntax:
|
||||
//
|
||||
// fmt.Sprintln(c.NewFormatter(a), c.NewFormatter(b))
|
||||
func (c *ConfigState) Sprintln(a ...interface{}) string {
|
||||
return fmt.Sprintln(c.convertArgs(a)...)
|
||||
}
|
||||
|
||||
/*
|
||||
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
|
||||
interface. As a result, it integrates cleanly with standard fmt package
|
||||
printing functions. The formatter is useful for inline printing of smaller data
|
||||
types similar to the standard %v format specifier.
|
||||
|
||||
The custom formatter only responds to the %v (most compact), %+v (adds pointer
|
||||
addresses), %#v (adds types), and %#+v (adds types and pointer addresses) verb
|
||||
combinations. Any other verbs such as %x and %q will be sent to the the
|
||||
standard fmt package for formatting. In addition, the custom formatter ignores
|
||||
the width and precision arguments (however they will still work on the format
|
||||
specifiers not handled by the custom formatter).
|
||||
|
||||
Typically this function shouldn't be called directly. It is much easier to make
|
||||
use of the custom formatter by calling one of the convenience functions such as
|
||||
c.Printf, c.Println, or c.Printf.
|
||||
*/
|
||||
func (c *ConfigState) NewFormatter(v interface{}) fmt.Formatter {
|
||||
return newFormatter(c, v)
|
||||
}
|
||||
|
||||
// Fdump formats and displays the passed arguments to io.Writer w. It formats
|
||||
// exactly the same as Dump.
|
||||
func (c *ConfigState) Fdump(w io.Writer, a ...interface{}) {
|
||||
fdump(c, w, a...)
|
||||
}
|
||||
|
||||
/*
|
||||
Dump displays the passed parameters to standard out with newlines, customizable
|
||||
indentation, and additional debug information such as complete types and all
|
||||
pointer addresses used to indirect to the final value. It provides the
|
||||
following features over the built-in printing facilities provided by the fmt
|
||||
package:
|
||||
|
||||
* Pointers are dereferenced and followed
|
||||
* Circular data structures are detected and handled properly
|
||||
* Custom Stringer/error interfaces are optionally invoked, including
|
||||
on unexported types
|
||||
* Custom types which only implement the Stringer/error interfaces via
|
||||
a pointer receiver are optionally invoked when passing non-pointer
|
||||
variables
|
||||
* Byte arrays and slices are dumped like the hexdump -C command which
|
||||
includes offsets, byte values in hex, and ASCII output
|
||||
|
||||
The configuration options are controlled by modifying the public members
|
||||
of c. See ConfigState for options documentation.
|
||||
|
||||
See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
|
||||
get the formatted result as a string.
|
||||
*/
|
||||
func (c *ConfigState) Dump(a ...interface{}) {
|
||||
fdump(c, os.Stdout, a...)
|
||||
}
|
||||
|
||||
// Sdump returns a string with the passed arguments formatted exactly the same
|
||||
// as Dump.
|
||||
func (c *ConfigState) Sdump(a ...interface{}) string {
|
||||
var buf bytes.Buffer
|
||||
fdump(c, &buf, a...)
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
// convertArgs accepts a slice of arguments and returns a slice of the same
|
||||
// length with each argument converted to a spew Formatter interface using
|
||||
// the ConfigState associated with s.
|
||||
func (c *ConfigState) convertArgs(args []interface{}) (formatters []interface{}) {
|
||||
formatters = make([]interface{}, len(args))
|
||||
for index, arg := range args {
|
||||
formatters[index] = newFormatter(c, arg)
|
||||
}
|
||||
return formatters
|
||||
}
|
||||
|
||||
// NewDefaultConfig returns a ConfigState with the following default settings.
|
||||
//
|
||||
// Indent: " "
|
||||
// MaxDepth: 0
|
||||
// DisableMethods: false
|
||||
// DisablePointerMethods: false
|
||||
// ContinueOnMethod: false
|
||||
// SortKeys: false
|
||||
func NewDefaultConfig() *ConfigState {
|
||||
return &ConfigState{Indent: " "}
|
||||
}
|
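
// Illustrative usage sketch (not part of the upstream go-spew source; it only
// uses the exported API defined above): an independent ConfigState from
// NewDefaultConfig can be tuned for stable, diff-friendly output in tests.
//
//    cfg := spew.NewDefaultConfig()
//    cfg.SortKeys = true                // deterministic map key ordering
//    cfg.DisablePointerAddresses = true // addresses change between runs
//    cfg.DisableCapacities = true       // capacities are noise when diffing
//    fmt.Println(cfg.Sdump(map[string]int{"b": 2, "a": 1}))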
211
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/doc.go
generated
vendored
Normal file
@ -0,0 +1,211 @@
/*
 * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

/*
Package spew implements a deep pretty printer for Go data structures to aid in
debugging.

A quick overview of the additional features spew provides over the built-in
printing facilities for Go data types is as follows:

    * Pointers are dereferenced and followed
    * Circular data structures are detected and handled properly
    * Custom Stringer/error interfaces are optionally invoked, including
      on unexported types
    * Custom types which only implement the Stringer/error interfaces via
      a pointer receiver are optionally invoked when passing non-pointer
      variables
    * Byte arrays and slices are dumped like the hexdump -C command which
      includes offsets, byte values in hex, and ASCII output (only when using
      Dump style)

There are two different approaches spew allows for dumping Go data structures:

    * Dump style which prints with newlines, customizable indentation,
      and additional debug information such as types and all pointer addresses
      used to indirect to the final value
    * A custom Formatter interface that integrates cleanly with the standard fmt
      package and replaces %v, %+v, %#v, and %#+v to provide inline printing
      similar to the default %v while providing the additional functionality
      outlined above and passing unsupported format verbs such as %x and %q
      along to fmt

Quick Start

This section demonstrates how to quickly get started with spew. See the
sections below for further details on formatting and configuration options.

To dump a variable with full newlines, indentation, type, and pointer
information use Dump, Fdump, or Sdump:

    spew.Dump(myVar1, myVar2, ...)
    spew.Fdump(someWriter, myVar1, myVar2, ...)
    str := spew.Sdump(myVar1, myVar2, ...)

Alternatively, if you would prefer to use format strings with a compacted inline
printing style, use the convenience wrappers Printf, Fprintf, etc with
%v (most compact), %+v (adds pointer addresses), %#v (adds types), or
%#+v (adds types and pointer addresses):

    spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
    spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
    spew.Fprintf(someWriter, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
    spew.Fprintf(someWriter, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)

Configuration Options

Configuration of spew is handled by fields in the ConfigState type. For
convenience, all of the top-level functions use a global state available
via the spew.Config global.

It is also possible to create a ConfigState instance that provides methods
equivalent to the top-level functions. This allows concurrent configuration
options. See the ConfigState documentation for more details.

The following configuration options are available:

    * Indent
        String to use for each indentation level for Dump functions.
        It is a single space by default. A popular alternative is "\t".

    * MaxDepth
        Maximum number of levels to descend into nested data structures.
        There is no limit by default.

    * DisableMethods
        Disables invocation of error and Stringer interface methods.
        Method invocation is enabled by default.

    * DisablePointerMethods
        Disables invocation of error and Stringer interface methods on types
        which only accept pointer receivers from non-pointer variables.
        Pointer method invocation is enabled by default.

    * DisablePointerAddresses
        DisablePointerAddresses specifies whether to disable the printing of
        pointer addresses. This is useful when diffing data structures in tests.

    * DisableCapacities
        DisableCapacities specifies whether to disable the printing of
        capacities for arrays, slices, maps and channels. This is useful when
        diffing data structures in tests.

    * ContinueOnMethod
        Enables recursion into types after invoking error and Stringer interface
        methods. Recursion after method invocation is disabled by default.

    * SortKeys
        Specifies map keys should be sorted before being printed. Use
        this to have a more deterministic, diffable output. Note that
        only native types (bool, int, uint, floats, uintptr and string)
        and types which implement error or Stringer interfaces are
        supported, with other types sorted according to the
        reflect.Value.String() output which guarantees display
        stability. Natural map order is used by default.

    * SpewKeys
        Specifies that, as a last resort attempt, map keys should be
        spewed to strings and sorted by those strings. This is only
        considered if SortKeys is true.
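
For example (an illustrative sketch that uses only the options listed above),
a global configuration tuned for deterministic test output could be:

    spew.Config.SortKeys = true
    spew.Config.DisablePointerAddresses = true
    spew.Config.DisableCapacities = true
    spew.Dump(myVar1)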

Dump Usage

Simply call spew.Dump with a list of variables you want to dump:

    spew.Dump(myVar1, myVar2, ...)

You may also call spew.Fdump if you would prefer to output to an arbitrary
io.Writer. For example, to dump to standard error:

    spew.Fdump(os.Stderr, myVar1, myVar2, ...)

A third option is to call spew.Sdump to get the formatted output as a string:

    str := spew.Sdump(myVar1, myVar2, ...)

Sample Dump Output

See the Dump example for details on the setup of the types and variables being
shown here.

    (main.Foo) {
     unexportedField: (*main.Bar)(0xf84002e210)({
      flag: (main.Flag) flagTwo,
      data: (uintptr) <nil>
     }),
     ExportedField: (map[interface {}]interface {}) (len=1) {
      (string) (len=3) "one": (bool) true
     }
    }

Byte (and uint8) arrays and slices are displayed uniquely like the hexdump -C
command as shown.

    ([]uint8) (len=32 cap=32) {
     00000000  11 12 13 14 15 16 17 18  19 1a 1b 1c 1d 1e 1f 20  |............... |
     00000010  21 22 23 24 25 26 27 28  29 2a 2b 2c 2d 2e 2f 30  |!"#$%&'()*+,-./0|
     00000020  31 32                                             |12|
    }

Custom Formatter

Spew provides a custom formatter that implements the fmt.Formatter interface
so that it integrates cleanly with standard fmt package printing functions. The
formatter is useful for inline printing of smaller data types similar to the
standard %v format specifier.

The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).

Custom Formatter Usage

The simplest way to make use of the spew custom formatter is to call one of the
convenience functions such as spew.Printf, spew.Println, or spew.Fprintf. The
functions have syntax you are most likely already familiar with:

    spew.Printf("myVar1: %v -- myVar2: %+v", myVar1, myVar2)
    spew.Printf("myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)
    spew.Println(myVar, myVar2)
    spew.Fprintf(os.Stderr, "myVar1: %v -- myVar2: %+v", myVar1, myVar2)
    spew.Fprintf(os.Stderr, "myVar3: %#v -- myVar4: %#+v", myVar3, myVar4)

See the Index for the full list of convenience functions.

Sample Formatter Output

Double pointer to a uint8:
      %v: <**>5
     %+v: <**>(0xf8400420d0->0xf8400420c8)5
     %#v: (**uint8)5
    %#+v: (**uint8)(0xf8400420d0->0xf8400420c8)5

Pointer to circular struct with a uint8 field and a pointer to itself:
      %v: <*>{1 <*><shown>}
     %+v: <*>(0xf84003e260){ui8:1 c:<*>(0xf84003e260)<shown>}
     %#v: (*main.circular){ui8:(uint8)1 c:(*main.circular)<shown>}
    %#+v: (*main.circular)(0xf84003e260){ui8:(uint8)1 c:(*main.circular)(0xf84003e260)<shown>}

See the Printf example for details on the setup of variables being shown
here.

Errors

Since it is possible for custom Stringer/error interfaces to panic, spew
detects them and handles them internally by printing the panic information
inline with the output. Since spew is intended to provide deep pretty printing
capabilities on structures, it intentionally does not return any errors.
*/
package spew
509
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/dump.go
generated
vendored
Normal file
@ -0,0 +1,509 @@
/*
 * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

package spew

import (
    "bytes"
    "encoding/hex"
    "fmt"
    "io"
    "os"
    "reflect"
    "regexp"
    "strconv"
    "strings"
)

var (
    // uint8Type is a reflect.Type representing a uint8. It is used to
    // convert cgo types to uint8 slices for hexdumping.
    uint8Type = reflect.TypeOf(uint8(0))

    // cCharRE is a regular expression that matches a cgo char.
    // It is used to detect character arrays to hexdump them.
    cCharRE = regexp.MustCompile(`^.*\._Ctype_char$`)

    // cUnsignedCharRE is a regular expression that matches a cgo unsigned
    // char. It is used to detect unsigned character arrays to hexdump
    // them.
    cUnsignedCharRE = regexp.MustCompile(`^.*\._Ctype_unsignedchar$`)

    // cUint8tCharRE is a regular expression that matches a cgo uint8_t.
    // It is used to detect uint8_t arrays to hexdump them.
    cUint8tCharRE = regexp.MustCompile(`^.*\._Ctype_uint8_t$`)
)
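
// For example (illustrative, assuming a cgo-using package named "main"): a
// value of type [4]main._Ctype_char matches cCharRE above, so dumpSlice
// below renders it as a hexdump rather than element by element.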

// dumpState contains information about the state of a dump operation.
type dumpState struct {
    w                io.Writer
    depth            int
    pointers         map[uintptr]int
    ignoreNextType   bool
    ignoreNextIndent bool
    cs               *ConfigState
}

// indent performs indentation according to the depth level and cs.Indent
// option.
func (d *dumpState) indent() {
    if d.ignoreNextIndent {
        d.ignoreNextIndent = false
        return
    }
    d.w.Write(bytes.Repeat([]byte(d.cs.Indent), d.depth))
}

// unpackValue returns values inside of non-nil interfaces when possible.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (d *dumpState) unpackValue(v reflect.Value) reflect.Value {
    if v.Kind() == reflect.Interface && !v.IsNil() {
        v = v.Elem()
    }
    return v
}

// dumpPtr handles formatting of pointers by indirecting them as necessary.
func (d *dumpState) dumpPtr(v reflect.Value) {
    // Remove pointers at or below the current depth from map used to detect
    // circular refs.
    for k, depth := range d.pointers {
        if depth >= d.depth {
            delete(d.pointers, k)
        }
    }

    // Keep list of all dereferenced pointers to show later.
    pointerChain := make([]uintptr, 0)

    // Figure out how many levels of indirection there are by dereferencing
    // pointers and unpacking interfaces down the chain while detecting circular
    // references.
    nilFound := false
    cycleFound := false
    indirects := 0
    ve := v
    for ve.Kind() == reflect.Ptr {
        if ve.IsNil() {
            nilFound = true
            break
        }
        indirects++
        addr := ve.Pointer()
        pointerChain = append(pointerChain, addr)
        if pd, ok := d.pointers[addr]; ok && pd < d.depth {
            cycleFound = true
            indirects--
            break
        }
        d.pointers[addr] = d.depth

        ve = ve.Elem()
        if ve.Kind() == reflect.Interface {
            if ve.IsNil() {
                nilFound = true
                break
            }
            ve = ve.Elem()
        }
    }

    // Display type information.
    d.w.Write(openParenBytes)
    d.w.Write(bytes.Repeat(asteriskBytes, indirects))
    d.w.Write([]byte(ve.Type().String()))
    d.w.Write(closeParenBytes)

    // Display pointer information.
    if !d.cs.DisablePointerAddresses && len(pointerChain) > 0 {
        d.w.Write(openParenBytes)
        for i, addr := range pointerChain {
            if i > 0 {
                d.w.Write(pointerChainBytes)
            }
            printHexPtr(d.w, addr)
        }
        d.w.Write(closeParenBytes)
    }

    // Display dereferenced value.
    d.w.Write(openParenBytes)
    switch {
    case nilFound:
        d.w.Write(nilAngleBytes)

    case cycleFound:
        d.w.Write(circularBytes)

    default:
        d.ignoreNextType = true
        d.dump(ve)
    }
    d.w.Write(closeParenBytes)
}
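
// Illustrative note on the output shape produced above: for the sample value
// in doc.go, one level of indirection renders as
//
//    (*main.Bar)(0xf84002e210)({ ... })
//
// i.e. the type in parens, then the dereferenced address chain in parens,
// then the dereferenced value in parens.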

// dumpSlice handles formatting of arrays and slices. Byte (uint8 under
// reflection) arrays and slices are dumped in hexdump -C fashion.
func (d *dumpState) dumpSlice(v reflect.Value) {
    // Determine whether this type should be hex dumped or not. Also,
    // for types which should be hexdumped, try to use the underlying data
    // first, then fall back to trying to convert them to a uint8 slice.
    var buf []uint8
    doConvert := false
    doHexDump := false
    numEntries := v.Len()
    if numEntries > 0 {
        vt := v.Index(0).Type()
        vts := vt.String()
        switch {
        // C types that need to be converted.
        case cCharRE.MatchString(vts):
            fallthrough
        case cUnsignedCharRE.MatchString(vts):
            fallthrough
        case cUint8tCharRE.MatchString(vts):
            doConvert = true

        // Try to use existing uint8 slices and fall back to converting
        // and copying if that fails.
        case vt.Kind() == reflect.Uint8:
            // We need an addressable interface to convert the type
            // to a byte slice. However, the reflect package won't
            // give us an interface on certain things like
            // unexported struct fields in order to enforce
            // visibility rules. We use unsafe, when available, to
            // bypass these restrictions since this package does not
            // mutate the values.
            vs := v
            if !vs.CanInterface() || !vs.CanAddr() {
                vs = unsafeReflectValue(vs)
            }
            if !UnsafeDisabled {
                vs = vs.Slice(0, numEntries)

                // Use the existing uint8 slice if it can be
                // type asserted.
                iface := vs.Interface()
                if slice, ok := iface.([]uint8); ok {
                    buf = slice
                    doHexDump = true
                    break
                }
            }

            // The underlying data needs to be converted if it can't
            // be type asserted to a uint8 slice.
            doConvert = true
        }

        // Copy and convert the underlying type if needed.
        if doConvert && vt.ConvertibleTo(uint8Type) {
            // Convert and copy each element into a uint8 byte
            // slice.
            buf = make([]uint8, numEntries)
            for i := 0; i < numEntries; i++ {
                vv := v.Index(i)
                buf[i] = uint8(vv.Convert(uint8Type).Uint())
            }
            doHexDump = true
        }
    }

    // Hexdump the entire slice as needed.
    if doHexDump {
        indent := strings.Repeat(d.cs.Indent, d.depth)
        str := indent + hex.Dump(buf)
        str = strings.Replace(str, "\n", "\n"+indent, -1)
        str = strings.TrimRight(str, d.cs.Indent)
        d.w.Write([]byte(str))
        return
    }

    // Recursively call dump for each item.
    for i := 0; i < numEntries; i++ {
        d.dump(d.unpackValue(v.Index(i)))
        if i < (numEntries - 1) {
            d.w.Write(commaNewlineBytes)
        } else {
            d.w.Write(newlineBytes)
        }
    }
}
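
// Illustrative note: when the hexdump path above is taken for a []uint8, the
// rendered rows have the shape shown in the doc.go sample output, e.g.
//
//    00000020  31 32                                             |12|
//
// with the indentation prefix repeated for the current depth.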

// dump is the main workhorse for dumping a value. It uses the passed reflect
// value to figure out what kind of object we are dealing with and formats it
// appropriately. It is a recursive function, however circular data structures
// are detected and handled properly.
func (d *dumpState) dump(v reflect.Value) {
    // Handle invalid reflect values immediately.
    kind := v.Kind()
    if kind == reflect.Invalid {
        d.w.Write(invalidAngleBytes)
        return
    }

    // Handle pointers specially.
    if kind == reflect.Ptr {
        d.indent()
        d.dumpPtr(v)
        return
    }

    // Print type information unless already handled elsewhere.
    if !d.ignoreNextType {
        d.indent()
        d.w.Write(openParenBytes)
        d.w.Write([]byte(v.Type().String()))
        d.w.Write(closeParenBytes)
        d.w.Write(spaceBytes)
    }
    d.ignoreNextType = false

    // Display length and capacity if the built-in len and cap functions
    // work with the value's kind and the len/cap itself is non-zero.
    valueLen, valueCap := 0, 0
    switch v.Kind() {
    case reflect.Array, reflect.Slice, reflect.Chan:
        valueLen, valueCap = v.Len(), v.Cap()
    case reflect.Map, reflect.String:
        valueLen = v.Len()
    }
    if valueLen != 0 || !d.cs.DisableCapacities && valueCap != 0 {
        d.w.Write(openParenBytes)
        if valueLen != 0 {
            d.w.Write(lenEqualsBytes)
            printInt(d.w, int64(valueLen), 10)
        }
        if !d.cs.DisableCapacities && valueCap != 0 {
            if valueLen != 0 {
                d.w.Write(spaceBytes)
            }
            d.w.Write(capEqualsBytes)
            printInt(d.w, int64(valueCap), 10)
        }
        d.w.Write(closeParenBytes)
        d.w.Write(spaceBytes)
    }

    // Call Stringer/error interfaces if they exist and the handle methods flag
    // is enabled.
    if !d.cs.DisableMethods {
        if (kind != reflect.Invalid) && (kind != reflect.Interface) {
            if handled := handleMethods(d.cs, d.w, v); handled {
                return
            }
        }
    }

    switch kind {
    case reflect.Invalid:
        // Do nothing. We should never get here since invalid has already
        // been handled above.

    case reflect.Bool:
        printBool(d.w, v.Bool())

    case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
        printInt(d.w, v.Int(), 10)

    case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
        printUint(d.w, v.Uint(), 10)

    case reflect.Float32:
        printFloat(d.w, v.Float(), 32)

    case reflect.Float64:
        printFloat(d.w, v.Float(), 64)

    case reflect.Complex64:
        printComplex(d.w, v.Complex(), 32)

    case reflect.Complex128:
        printComplex(d.w, v.Complex(), 64)

    case reflect.Slice:
        if v.IsNil() {
            d.w.Write(nilAngleBytes)
            break
        }
        fallthrough

    case reflect.Array:
        d.w.Write(openBraceNewlineBytes)
        d.depth++
        if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
            d.indent()
            d.w.Write(maxNewlineBytes)
        } else {
            d.dumpSlice(v)
        }
        d.depth--
        d.indent()
        d.w.Write(closeBraceBytes)

    case reflect.String:
        d.w.Write([]byte(strconv.Quote(v.String())))

    case reflect.Interface:
        // The only time we should get here is for nil interfaces due to
        // unpackValue calls.
        if v.IsNil() {
            d.w.Write(nilAngleBytes)
        }

    case reflect.Ptr:
        // Do nothing. We should never get here since pointers have already
        // been handled above.

    case reflect.Map:
        // nil maps should be indicated as different from empty maps
        if v.IsNil() {
            d.w.Write(nilAngleBytes)
            break
        }

        d.w.Write(openBraceNewlineBytes)
        d.depth++
        if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
            d.indent()
            d.w.Write(maxNewlineBytes)
        } else {
            numEntries := v.Len()
            keys := v.MapKeys()
            if d.cs.SortKeys {
                sortValues(keys, d.cs)
            }
            for i, key := range keys {
                d.dump(d.unpackValue(key))
                d.w.Write(colonSpaceBytes)
                d.ignoreNextIndent = true
                d.dump(d.unpackValue(v.MapIndex(key)))
                if i < (numEntries - 1) {
                    d.w.Write(commaNewlineBytes)
                } else {
                    d.w.Write(newlineBytes)
                }
            }
        }
        d.depth--
        d.indent()
        d.w.Write(closeBraceBytes)

    case reflect.Struct:
        d.w.Write(openBraceNewlineBytes)
        d.depth++
        if (d.cs.MaxDepth != 0) && (d.depth > d.cs.MaxDepth) {
            d.indent()
            d.w.Write(maxNewlineBytes)
        } else {
            vt := v.Type()
            numFields := v.NumField()
            for i := 0; i < numFields; i++ {
                d.indent()
                vtf := vt.Field(i)
                d.w.Write([]byte(vtf.Name))
                d.w.Write(colonSpaceBytes)
                d.ignoreNextIndent = true
                d.dump(d.unpackValue(v.Field(i)))
                if i < (numFields - 1) {
                    d.w.Write(commaNewlineBytes)
                } else {
                    d.w.Write(newlineBytes)
                }
            }
        }
        d.depth--
        d.indent()
        d.w.Write(closeBraceBytes)

    case reflect.Uintptr:
        printHexPtr(d.w, uintptr(v.Uint()))

    case reflect.UnsafePointer, reflect.Chan, reflect.Func:
        printHexPtr(d.w, v.Pointer())

    // There were not any other types at the time this code was written, but
    // fall back to letting the default fmt package handle it in case any new
    // types are added.
    default:
        if v.CanInterface() {
            fmt.Fprintf(d.w, "%v", v.Interface())
        } else {
            fmt.Fprintf(d.w, "%v", v.String())
        }
    }
}

// fdump is a helper function to consolidate the logic from the various public
// methods which take varying writers and config states.
func fdump(cs *ConfigState, w io.Writer, a ...interface{}) {
    for _, arg := range a {
        if arg == nil {
            w.Write(interfaceBytes)
            w.Write(spaceBytes)
            w.Write(nilAngleBytes)
            w.Write(newlineBytes)
            continue
        }

        d := dumpState{w: w, cs: cs}
        d.pointers = make(map[uintptr]int)
        d.dump(reflect.ValueOf(arg))
        d.w.Write(newlineBytes)
    }
}

// Fdump formats and displays the passed arguments to io.Writer w. It formats
// exactly the same as Dump.
func Fdump(w io.Writer, a ...interface{}) {
    fdump(&Config, w, a...)
}

// Sdump returns a string with the passed arguments formatted exactly the same
// as Dump.
func Sdump(a ...interface{}) string {
    var buf bytes.Buffer
    fdump(&Config, &buf, a...)
    return buf.String()
}

/*
Dump displays the passed parameters to standard out with newlines, customizable
indentation, and additional debug information such as complete types and all
pointer addresses used to indirect to the final value. It provides the
following features over the built-in printing facilities provided by the fmt
package:

    * Pointers are dereferenced and followed
    * Circular data structures are detected and handled properly
    * Custom Stringer/error interfaces are optionally invoked, including
      on unexported types
    * Custom types which only implement the Stringer/error interfaces via
      a pointer receiver are optionally invoked when passing non-pointer
      variables
    * Byte arrays and slices are dumped like the hexdump -C command which
      includes offsets, byte values in hex, and ASCII output

The configuration options are controlled by an exported package global,
spew.Config. See ConfigState for options documentation.

See Fdump if you would prefer dumping to an arbitrary io.Writer or Sdump to
get the formatted result as a string.
*/
func Dump(a ...interface{}) {
    fdump(&Config, os.Stdout, a...)
}
419
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/format.go
generated
vendored
Normal file
@ -0,0 +1,419 @@
/*
 * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

package spew

import (
    "bytes"
    "fmt"
    "reflect"
    "strconv"
    "strings"
)

// supportedFlags is a list of all the character flags supported by fmt package.
const supportedFlags = "0-+# "

// formatState implements the fmt.Formatter interface and contains information
// about the state of a formatting operation. The NewFormatter function can
// be used to get a new Formatter which can be used directly as an argument
// in standard fmt package printing calls.
type formatState struct {
    value          interface{}
    fs             fmt.State
    depth          int
    pointers       map[uintptr]int
    ignoreNextType bool
    cs             *ConfigState
}

// buildDefaultFormat recreates the original format string without precision
// and width information to pass in to fmt.Sprintf in the case of an
// unrecognized type. Unless new types are added to the language, this
// function won't ever be called.
func (f *formatState) buildDefaultFormat() (format string) {
    buf := bytes.NewBuffer(percentBytes)

    for _, flag := range supportedFlags {
        if f.fs.Flag(int(flag)) {
            buf.WriteRune(flag)
        }
    }

    buf.WriteRune('v')

    format = buf.String()
    return format
}

// constructOrigFormat recreates the original format string including precision
// and width information to pass along to the standard fmt package. This allows
// automatic deferral of all format strings this package doesn't support.
func (f *formatState) constructOrigFormat(verb rune) (format string) {
    buf := bytes.NewBuffer(percentBytes)

    for _, flag := range supportedFlags {
        if f.fs.Flag(int(flag)) {
            buf.WriteRune(flag)
        }
    }

    if width, ok := f.fs.Width(); ok {
        buf.WriteString(strconv.Itoa(width))
    }

    if precision, ok := f.fs.Precision(); ok {
        buf.Write(precisionBytes)
        buf.WriteString(strconv.Itoa(precision))
    }

    buf.WriteRune(verb)

    format = buf.String()
    return format
}
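
// Illustrative note: for a call like fmt.Fprintf(w, "%+8.2x", wrappedValue),
// the loop above recreates "%+8.2x" from the fmt.State flags, width, and
// precision, so the unhandled verb can be deferred to the standard fmt
// package unchanged.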

// unpackValue returns values inside of non-nil interfaces when possible and
// ensures that types for values which have been unpacked from an interface
// are displayed when the show types flag is also set.
// This is useful for data types like structs, arrays, slices, and maps which
// can contain varying types packed inside an interface.
func (f *formatState) unpackValue(v reflect.Value) reflect.Value {
    if v.Kind() == reflect.Interface {
        f.ignoreNextType = false
        if !v.IsNil() {
            v = v.Elem()
        }
    }
    return v
}

// formatPtr handles formatting of pointers by indirecting them as necessary.
func (f *formatState) formatPtr(v reflect.Value) {
    // Display nil if top level pointer is nil.
    showTypes := f.fs.Flag('#')
    if v.IsNil() && (!showTypes || f.ignoreNextType) {
        f.fs.Write(nilAngleBytes)
        return
    }

    // Remove pointers at or below the current depth from map used to detect
    // circular refs.
    for k, depth := range f.pointers {
        if depth >= f.depth {
            delete(f.pointers, k)
        }
    }

    // Keep list of all dereferenced pointers to possibly show later.
    pointerChain := make([]uintptr, 0)

    // Figure out how many levels of indirection there are by dereferencing
    // pointers and unpacking interfaces down the chain while detecting circular
    // references.
    nilFound := false
    cycleFound := false
    indirects := 0
    ve := v
    for ve.Kind() == reflect.Ptr {
        if ve.IsNil() {
            nilFound = true
            break
        }
        indirects++
        addr := ve.Pointer()
        pointerChain = append(pointerChain, addr)
        if pd, ok := f.pointers[addr]; ok && pd < f.depth {
            cycleFound = true
            indirects--
            break
        }
        f.pointers[addr] = f.depth

        ve = ve.Elem()
        if ve.Kind() == reflect.Interface {
            if ve.IsNil() {
                nilFound = true
                break
            }
            ve = ve.Elem()
        }
    }

    // Display type or indirection level depending on flags.
    if showTypes && !f.ignoreNextType {
        f.fs.Write(openParenBytes)
        f.fs.Write(bytes.Repeat(asteriskBytes, indirects))
        f.fs.Write([]byte(ve.Type().String()))
        f.fs.Write(closeParenBytes)
    } else {
        if nilFound || cycleFound {
            indirects += strings.Count(ve.Type().String(), "*")
        }
        f.fs.Write(openAngleBytes)
        f.fs.Write([]byte(strings.Repeat("*", indirects)))
        f.fs.Write(closeAngleBytes)
    }

    // Display pointer information depending on flags.
    if f.fs.Flag('+') && (len(pointerChain) > 0) {
        f.fs.Write(openParenBytes)
        for i, addr := range pointerChain {
            if i > 0 {
                f.fs.Write(pointerChainBytes)
            }
            printHexPtr(f.fs, addr)
        }
        f.fs.Write(closeParenBytes)
    }

    // Display dereferenced value.
    switch {
    case nilFound:
        f.fs.Write(nilAngleBytes)

    case cycleFound:
        f.fs.Write(circularShortBytes)

    default:
        f.ignoreNextType = true
        f.format(ve)
    }
}

// format is the main workhorse for providing the Formatter interface. It
// uses the passed reflect value to figure out what kind of object we are
// dealing with and formats it appropriately. It is a recursive function,
// however circular data structures are detected and handled properly.
func (f *formatState) format(v reflect.Value) {
    // Handle invalid reflect values immediately.
    kind := v.Kind()
    if kind == reflect.Invalid {
        f.fs.Write(invalidAngleBytes)
        return
    }

    // Handle pointers specially.
    if kind == reflect.Ptr {
        f.formatPtr(v)
        return
    }

    // Print type information unless already handled elsewhere.
    if !f.ignoreNextType && f.fs.Flag('#') {
        f.fs.Write(openParenBytes)
        f.fs.Write([]byte(v.Type().String()))
        f.fs.Write(closeParenBytes)
    }
    f.ignoreNextType = false

    // Call Stringer/error interfaces if they exist and the handle methods
    // flag is enabled.
    if !f.cs.DisableMethods {
        if (kind != reflect.Invalid) && (kind != reflect.Interface) {
            if handled := handleMethods(f.cs, f.fs, v); handled {
                return
            }
        }
    }

    switch kind {
    case reflect.Invalid:
        // Do nothing. We should never get here since invalid has already
        // been handled above.

    case reflect.Bool:
        printBool(f.fs, v.Bool())

    case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Int:
        printInt(f.fs, v.Int(), 10)

    case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uint:
        printUint(f.fs, v.Uint(), 10)

    case reflect.Float32:
        printFloat(f.fs, v.Float(), 32)

    case reflect.Float64:
        printFloat(f.fs, v.Float(), 64)

    case reflect.Complex64:
        printComplex(f.fs, v.Complex(), 32)

    case reflect.Complex128:
        printComplex(f.fs, v.Complex(), 64)

    case reflect.Slice:
        if v.IsNil() {
            f.fs.Write(nilAngleBytes)
            break
        }
        fallthrough

    case reflect.Array:
        f.fs.Write(openBracketBytes)
        f.depth++
        if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
            f.fs.Write(maxShortBytes)
        } else {
            numEntries := v.Len()
            for i := 0; i < numEntries; i++ {
                if i > 0 {
                    f.fs.Write(spaceBytes)
                }
                f.ignoreNextType = true
                f.format(f.unpackValue(v.Index(i)))
            }
        }
        f.depth--
        f.fs.Write(closeBracketBytes)

    case reflect.String:
        f.fs.Write([]byte(v.String()))

    case reflect.Interface:
        // The only time we should get here is for nil interfaces due to
        // unpackValue calls.
        if v.IsNil() {
            f.fs.Write(nilAngleBytes)
        }

    case reflect.Ptr:
        // Do nothing. We should never get here since pointers have already
        // been handled above.

    case reflect.Map:
        // nil maps should be indicated as different from empty maps
        if v.IsNil() {
            f.fs.Write(nilAngleBytes)
            break
        }

        f.fs.Write(openMapBytes)
        f.depth++
        if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
            f.fs.Write(maxShortBytes)
        } else {
            keys := v.MapKeys()
            if f.cs.SortKeys {
                sortValues(keys, f.cs)
            }
            for i, key := range keys {
                if i > 0 {
                    f.fs.Write(spaceBytes)
                }
                f.ignoreNextType = true
                f.format(f.unpackValue(key))
                f.fs.Write(colonBytes)
                f.ignoreNextType = true
                f.format(f.unpackValue(v.MapIndex(key)))
            }
        }
        f.depth--
        f.fs.Write(closeMapBytes)

    case reflect.Struct:
        numFields := v.NumField()
        f.fs.Write(openBraceBytes)
        f.depth++
        if (f.cs.MaxDepth != 0) && (f.depth > f.cs.MaxDepth) {
            f.fs.Write(maxShortBytes)
        } else {
            vt := v.Type()
            for i := 0; i < numFields; i++ {
                if i > 0 {
                    f.fs.Write(spaceBytes)
                }
                vtf := vt.Field(i)
                if f.fs.Flag('+') || f.fs.Flag('#') {
                    f.fs.Write([]byte(vtf.Name))
                    f.fs.Write(colonBytes)
                }
                f.format(f.unpackValue(v.Field(i)))
            }
        }
        f.depth--
        f.fs.Write(closeBraceBytes)

    case reflect.Uintptr:
        printHexPtr(f.fs, uintptr(v.Uint()))

    case reflect.UnsafePointer, reflect.Chan, reflect.Func:
        printHexPtr(f.fs, v.Pointer())

    // There were not any other types at the time this code was written, but
    // fall back to letting the default fmt package handle it if any get added.
    default:
        format := f.buildDefaultFormat()
        if v.CanInterface() {
            fmt.Fprintf(f.fs, format, v.Interface())
        } else {
            fmt.Fprintf(f.fs, format, v.String())
        }
    }
}

// Format satisfies the fmt.Formatter interface. See NewFormatter for usage
// details.
func (f *formatState) Format(fs fmt.State, verb rune) {
    f.fs = fs

    // Use standard formatting for verbs that are not v.
    if verb != 'v' {
        format := f.constructOrigFormat(verb)
        fmt.Fprintf(fs, format, f.value)
        return
    }

    if f.value == nil {
        if fs.Flag('#') {
            fs.Write(interfaceBytes)
        }
        fs.Write(nilAngleBytes)
        return
    }

    f.format(reflect.ValueOf(f.value))
}

// newFormatter is a helper function to consolidate the logic from the various
// public methods which take varying config states.
func newFormatter(cs *ConfigState, v interface{}) fmt.Formatter {
    fs := &formatState{value: v, cs: cs}
    fs.pointers = make(map[uintptr]int)
    return fs
}

/*
NewFormatter returns a custom formatter that satisfies the fmt.Formatter
interface. As a result, it integrates cleanly with standard fmt package
printing functions. The formatter is useful for inline printing of smaller data
types similar to the standard %v format specifier.

The custom formatter only responds to the %v (most compact), %+v (adds pointer
addresses), %#v (adds types), or %#+v (adds types and pointer addresses) verb
combinations. Any other verbs such as %x and %q will be sent to the
standard fmt package for formatting. In addition, the custom formatter ignores
the width and precision arguments (however they will still work on the format
specifiers not handled by the custom formatter).

Typically this function shouldn't be called directly. It is much easier to make
use of the custom formatter by calling one of the convenience functions such as
Printf, Println, or Fprintf.
*/
func NewFormatter(v interface{}) fmt.Formatter {
    return newFormatter(&Config, v)
}
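
// Illustrative usage (myVar is a stand-in for any value): a single formatter
// can be reused across verbs; %v and %+v are handled by spew, while an
// unsupported verb such as %x is passed along to the standard fmt package:
//
//    f := spew.NewFormatter(myVar)
//    fmt.Printf("%v %+v %x\n", f, f, f)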
148
endgamefiles/sourcecode/gobalance/vendor/github.com/davecgh/go-spew/spew/spew.go
generated
vendored
Normal file
@ -0,0 +1,148 @@
/*
 * Copyright (c) 2013-2016 Dave Collins <dave@davec.name>
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose with or without fee is hereby granted, provided that the above
 * copyright notice and this permission notice appear in all copies.
 *
 * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
 * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
 * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
 * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
 * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
 */

package spew

import (
    "fmt"
    "io"
)

// Errorf is a wrapper for fmt.Errorf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the formatted string as a value that satisfies error. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Errorf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Errorf(format string, a ...interface{}) (err error) {
    return fmt.Errorf(format, convertArgs(a)...)
}

// Fprint is a wrapper for fmt.Fprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Fprint(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprint(w io.Writer, a ...interface{}) (n int, err error) {
    return fmt.Fprint(w, convertArgs(a)...)
}

// Fprintf is a wrapper for fmt.Fprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Fprintf(w, format, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error) {
    return fmt.Fprintf(w, format, convertArgs(a)...)
}

// Fprintln is a wrapper for fmt.Fprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Fprintln(w, spew.NewFormatter(a), spew.NewFormatter(b))
func Fprintln(w io.Writer, a ...interface{}) (n int, err error) {
    return fmt.Fprintln(w, convertArgs(a)...)
}

// Print is a wrapper for fmt.Print that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Print(spew.NewFormatter(a), spew.NewFormatter(b))
func Print(a ...interface{}) (n int, err error) {
    return fmt.Print(convertArgs(a)...)
}

// Printf is a wrapper for fmt.Printf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Printf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Printf(format string, a ...interface{}) (n int, err error) {
    return fmt.Printf(format, convertArgs(a)...)
}

// Println is a wrapper for fmt.Println that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the number of bytes written and any write error encountered. See
// NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Println(spew.NewFormatter(a), spew.NewFormatter(b))
func Println(a ...interface{}) (n int, err error) {
    return fmt.Println(convertArgs(a)...)
}

// Sprint is a wrapper for fmt.Sprint that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Sprint(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprint(a ...interface{}) string {
    return fmt.Sprint(convertArgs(a)...)
}

// Sprintf is a wrapper for fmt.Sprintf that treats each argument as if it were
// passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Sprintf(format, spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintf(format string, a ...interface{}) string {
    return fmt.Sprintf(format, convertArgs(a)...)
}

// Sprintln is a wrapper for fmt.Sprintln that treats each argument as if it
// were passed with a default Formatter interface returned by NewFormatter. It
// returns the resulting string. See NewFormatter for formatting details.
//
// This function is shorthand for the following syntax:
//
//    fmt.Sprintln(spew.NewFormatter(a), spew.NewFormatter(b))
func Sprintln(a ...interface{}) string {
    return fmt.Sprintln(convertArgs(a)...)
}

// convertArgs accepts a slice of arguments and returns a slice of the same
// length with each argument converted to a default spew Formatter interface.
func convertArgs(args []interface{}) (formatters []interface{}) {
    formatters = make([]interface{}, len(args))
    for index, arg := range args {
        formatters[index] = NewFormatter(arg)
    }
    return formatters
}
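
// Illustrative note: every wrapper above funnels its arguments through
// convertArgs, so a call such as
//
//    spew.Println(myVar1, myVar2)
//
// behaves exactly like
//
//    fmt.Println(spew.NewFormatter(myVar1), spew.NewFormatter(myVar2))
//
// where myVar1 and myVar2 stand in for any values.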
27
endgamefiles/sourcecode/gobalance/vendor/github.com/pmezard/go-difflib/LICENSE
generated
vendored
Normal file
@ -0,0 +1,27 @@
Copyright (c) 2013, Patrick Mezard
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    Redistributions of source code must retain the above copyright
    notice, this list of conditions and the following disclaimer.
    Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in the
    documentation and/or other materials provided with the distribution.
    The names of its contributors may not be used to endorse or promote
    products derived from this software without specific prior written
    permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
772
endgamefiles/sourcecode/gobalance/vendor/github.com/pmezard/go-difflib/difflib/difflib.go
generated
vendored
Normal file
@ -0,0 +1,772 @@
// Package difflib is a partial port of Python difflib module.
//
// It provides tools to compare sequences of strings and generate textual diffs.
//
// The following class and functions have been ported:
//
// - SequenceMatcher
//
// - unified_diff
//
// - context_diff
//
// Getting unified diffs was the main goal of the port. Keep in mind this code
// is mostly suitable to output text differences in a human friendly way, there
// are no guarantees generated diffs are consumable by patch(1).
package difflib

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"strings"
)

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func max(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func calculateRatio(matches, length int) float64 {
	if length > 0 {
		return 2.0 * float64(matches) / float64(length)
	}
	return 1.0
}

type Match struct {
	A    int
	B    int
	Size int
}

type OpCode struct {
	Tag byte
	I1  int
	I2  int
	J1  int
	J2  int
}

// SequenceMatcher compares sequence of strings. The basic
// algorithm predates, and is a little fancier than, an algorithm
// published in the late 1980's by Ratcliff and Obershelp under the
// hyperbolic name "gestalt pattern matching". The basic idea is to find
// the longest contiguous matching subsequence that contains no "junk"
// elements (R-O doesn't address junk). The same idea is then applied
// recursively to the pieces of the sequences to the left and to the right
// of the matching subsequence. This does not yield minimal edit
// sequences, but does tend to yield matches that "look right" to people.
//
// SequenceMatcher tries to compute a "human-friendly diff" between two
// sequences. Unlike e.g. UNIX(tm) diff, the fundamental notion is the
// longest *contiguous* & junk-free matching subsequence. That's what
// catches peoples' eyes. The Windows(tm) windiff has another interesting
// notion, pairing up elements that appear uniquely in each sequence.
// That, and the method here, appear to yield more intuitive difference
// reports than does diff. This method appears to be the least vulnerable
// to synching up on blocks of "junk lines", though (like blank lines in
// ordinary text files, or maybe "<P>" lines in HTML files). That may be
// because this is the only method of the 3 that has a *concept* of
// "junk" <wink>.
//
// Timing: Basic R-O is cubic time worst case and quadratic time expected
// case. SequenceMatcher is quadratic time for the worst case and has
// expected-case behavior dependent in a complicated way on how many
// elements the sequences have in common; best case time is linear.
type SequenceMatcher struct {
	a              []string
	b              []string
	b2j            map[string][]int
	IsJunk         func(string) bool
	autoJunk       bool
	bJunk          map[string]struct{}
	matchingBlocks []Match
	fullBCount     map[string]int
	bPopular       map[string]struct{}
	opCodes        []OpCode
}

func NewMatcher(a, b []string) *SequenceMatcher {
	m := SequenceMatcher{autoJunk: true}
	m.SetSeqs(a, b)
	return &m
}

func NewMatcherWithJunk(a, b []string, autoJunk bool,
	isJunk func(string) bool) *SequenceMatcher {

	m := SequenceMatcher{IsJunk: isJunk, autoJunk: autoJunk}
	m.SetSeqs(a, b)
	return &m
}

// Set two sequences to be compared.
func (m *SequenceMatcher) SetSeqs(a, b []string) {
	m.SetSeq1(a)
	m.SetSeq2(b)
}

// Set the first sequence to be compared. The second sequence to be compared is
// not changed.
//
// SequenceMatcher computes and caches detailed information about the second
// sequence, so if you want to compare one sequence S against many sequences,
// use .SetSeq2(s) once and call .SetSeq1(x) repeatedly for each of the other
// sequences.
//
// See also SetSeqs() and SetSeq2().
func (m *SequenceMatcher) SetSeq1(a []string) {
	if &a == &m.a {
		return
	}
	m.a = a
	m.matchingBlocks = nil
	m.opCodes = nil
}

// Set the second sequence to be compared. The first sequence to be compared is
// not changed.
func (m *SequenceMatcher) SetSeq2(b []string) {
	if &b == &m.b {
		return
	}
	m.b = b
	m.matchingBlocks = nil
	m.opCodes = nil
	m.fullBCount = nil
	m.chainB()
}

func (m *SequenceMatcher) chainB() {
	// Populate line -> index mapping
	b2j := map[string][]int{}
	for i, s := range m.b {
		indices := b2j[s]
		indices = append(indices, i)
		b2j[s] = indices
	}

	// Purge junk elements
	m.bJunk = map[string]struct{}{}
	if m.IsJunk != nil {
		junk := m.bJunk
		for s, _ := range b2j {
			if m.IsJunk(s) {
				junk[s] = struct{}{}
			}
		}
		for s, _ := range junk {
			delete(b2j, s)
		}
	}

	// Purge remaining popular elements
	popular := map[string]struct{}{}
	n := len(m.b)
	if m.autoJunk && n >= 200 {
		ntest := n/100 + 1
		for s, indices := range b2j {
			if len(indices) > ntest {
				popular[s] = struct{}{}
			}
		}
		for s, _ := range popular {
			delete(b2j, s)
		}
	}
	m.bPopular = popular
	m.b2j = b2j
}

func (m *SequenceMatcher) isBJunk(s string) bool {
	_, ok := m.bJunk[s]
	return ok
}

// Find longest matching block in a[alo:ahi] and b[blo:bhi].
//
// If IsJunk is not defined:
//
// Return (i,j,k) such that a[i:i+k] is equal to b[j:j+k], where
//     alo <= i <= i+k <= ahi
//     blo <= j <= j+k <= bhi
// and for all (i',j',k') meeting those conditions,
//     k >= k'
//     i <= i'
//     and if i == i', j <= j'
//
// In other words, of all maximal matching blocks, return one that
// starts earliest in a, and of all those maximal matching blocks that
// start earliest in a, return the one that starts earliest in b.
//
// If IsJunk is defined, first the longest matching block is
// determined as above, but with the additional restriction that no
// junk element appears in the block. Then that block is extended as
// far as possible by matching (only) junk elements on both sides. So
// the resulting block never matches on junk except as identical junk
// happens to be adjacent to an "interesting" match.
//
// If no blocks match, return (alo, blo, 0).
func (m *SequenceMatcher) findLongestMatch(alo, ahi, blo, bhi int) Match {
	// CAUTION: stripping common prefix or suffix would be incorrect.
	// E.g.,
	//    ab
	//    acab
	// Longest matching block is "ab", but if common prefix is
	// stripped, it's "a" (tied with "b"). UNIX(tm) diff does so
	// strip, so ends up claiming that ab is changed to acab by
	// inserting "ca" in the middle. That's minimal but unintuitive:
	// "it's obvious" that someone inserted "ac" at the front.
	// Windiff ends up at the same place as diff, but by pairing up
	// the unique 'b's and then matching the first two 'a's.
	besti, bestj, bestsize := alo, blo, 0

	// find longest junk-free match
	// during an iteration of the loop, j2len[j] = length of longest
	// junk-free match ending with a[i-1] and b[j]
	j2len := map[int]int{}
	for i := alo; i != ahi; i++ {
		// look at all instances of a[i] in b; note that because
		// b2j has no junk keys, the loop is skipped if a[i] is junk
		newj2len := map[int]int{}
		for _, j := range m.b2j[m.a[i]] {
			// a[i] matches b[j]
			if j < blo {
				continue
			}
			if j >= bhi {
				break
			}
			k := j2len[j-1] + 1
			newj2len[j] = k
			if k > bestsize {
				besti, bestj, bestsize = i-k+1, j-k+1, k
			}
		}
		j2len = newj2len
	}

	// Extend the best by non-junk elements on each end. In particular,
	// "popular" non-junk elements aren't in b2j, which greatly speeds
	// the inner loop above, but also means "the best" match so far
	// doesn't contain any junk *or* popular non-junk elements.
	for besti > alo && bestj > blo && !m.isBJunk(m.b[bestj-1]) &&
		m.a[besti-1] == m.b[bestj-1] {
		besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
	}
	for besti+bestsize < ahi && bestj+bestsize < bhi &&
		!m.isBJunk(m.b[bestj+bestsize]) &&
		m.a[besti+bestsize] == m.b[bestj+bestsize] {
		bestsize += 1
	}

	// Now that we have a wholly interesting match (albeit possibly
	// empty!), we may as well suck up the matching junk on each
	// side of it too. Can't think of a good reason not to, and it
	// saves post-processing the (possibly considerable) expense of
	// figuring out what to do with it. In the case of an empty
	// interesting match, this is clearly the right thing to do,
	// because no other kind of match is possible in the regions.
	for besti > alo && bestj > blo && m.isBJunk(m.b[bestj-1]) &&
		m.a[besti-1] == m.b[bestj-1] {
		besti, bestj, bestsize = besti-1, bestj-1, bestsize+1
	}
	for besti+bestsize < ahi && bestj+bestsize < bhi &&
		m.isBJunk(m.b[bestj+bestsize]) &&
		m.a[besti+bestsize] == m.b[bestj+bestsize] {
		bestsize += 1
	}

	return Match{A: besti, B: bestj, Size: bestsize}
}

// Return list of triples describing matching subsequences.
//
// Each triple is of the form (i, j, n), and means that
// a[i:i+n] == b[j:j+n]. The triples are monotonically increasing in
// i and in j. It's also guaranteed that if (i, j, n) and (i', j', n') are
// adjacent triples in the list, and the second is not the last triple in the
// list, then i+n != i' or j+n != j'. IOW, adjacent triples never describe
// adjacent equal blocks.
//
// The last triple is a dummy, (len(a), len(b), 0), and is the only
// triple with n==0.
func (m *SequenceMatcher) GetMatchingBlocks() []Match {
	if m.matchingBlocks != nil {
		return m.matchingBlocks
	}

	var matchBlocks func(alo, ahi, blo, bhi int, matched []Match) []Match
	matchBlocks = func(alo, ahi, blo, bhi int, matched []Match) []Match {
		match := m.findLongestMatch(alo, ahi, blo, bhi)
		i, j, k := match.A, match.B, match.Size
		if match.Size > 0 {
			if alo < i && blo < j {
				matched = matchBlocks(alo, i, blo, j, matched)
			}
			matched = append(matched, match)
			if i+k < ahi && j+k < bhi {
				matched = matchBlocks(i+k, ahi, j+k, bhi, matched)
			}
		}
		return matched
	}
	matched := matchBlocks(0, len(m.a), 0, len(m.b), nil)

	// It's possible that we have adjacent equal blocks in the
	// matching_blocks list now.
	nonAdjacent := []Match{}
	i1, j1, k1 := 0, 0, 0
	for _, b := range matched {
		// Is this block adjacent to i1, j1, k1?
		i2, j2, k2 := b.A, b.B, b.Size
		if i1+k1 == i2 && j1+k1 == j2 {
			// Yes, so collapse them -- this just increases the length of
			// the first block by the length of the second, and the first
			// block so lengthened remains the block to compare against.
			k1 += k2
		} else {
			// Not adjacent. Remember the first block (k1==0 means it's
			// the dummy we started with), and make the second block the
			// new block to compare against.
			if k1 > 0 {
				nonAdjacent = append(nonAdjacent, Match{i1, j1, k1})
			}
			i1, j1, k1 = i2, j2, k2
		}
	}
	if k1 > 0 {
		nonAdjacent = append(nonAdjacent, Match{i1, j1, k1})
	}

	nonAdjacent = append(nonAdjacent, Match{len(m.a), len(m.b), 0})
	m.matchingBlocks = nonAdjacent
	return m.matchingBlocks
}

// Return list of 5-tuples describing how to turn a into b.
//
// Each tuple is of the form (tag, i1, i2, j1, j2). The first tuple
// has i1 == j1 == 0, and remaining tuples have i1 == the i2 from the
// tuple preceding it, and likewise for j1 == the previous j2.
//
// The tags are characters, with these meanings:
//
// 'r' (replace): a[i1:i2] should be replaced by b[j1:j2]
//
// 'd' (delete): a[i1:i2] should be deleted, j1==j2 in this case.
//
// 'i' (insert): b[j1:j2] should be inserted at a[i1:i1], i1==i2 in this case.
//
// 'e' (equal): a[i1:i2] == b[j1:j2]
func (m *SequenceMatcher) GetOpCodes() []OpCode {
	if m.opCodes != nil {
		return m.opCodes
	}
	i, j := 0, 0
	matching := m.GetMatchingBlocks()
	opCodes := make([]OpCode, 0, len(matching))
	for _, m := range matching {
		// invariant: we've pumped out correct diffs to change
		// a[:i] into b[:j], and the next matching block is
		// a[ai:ai+size] == b[bj:bj+size]. So we need to pump
		// out a diff to change a[i:ai] into b[j:bj], pump out
		// the matching block, and move (i,j) beyond the match
		ai, bj, size := m.A, m.B, m.Size
		tag := byte(0)
		if i < ai && j < bj {
			tag = 'r'
		} else if i < ai {
			tag = 'd'
		} else if j < bj {
			tag = 'i'
		}
		if tag > 0 {
			opCodes = append(opCodes, OpCode{tag, i, ai, j, bj})
		}
		i, j = ai+size, bj+size
		// the list of matching blocks is terminated by a
		// sentinel with size 0
		if size > 0 {
			opCodes = append(opCodes, OpCode{'e', ai, i, bj, j})
		}
	}
	m.opCodes = opCodes
	return m.opCodes
}
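// Worked example (editor's illustration, not part of the vendored source):
// for a = {"one", "two", "three"} and b = {"one", "2", "three"},
// GetOpCodes returns
//
//	{'e', 0, 1, 0, 1}  // "one" is kept
//	{'r', 1, 2, 1, 2}  // "two" is replaced by "2"
//	{'e', 2, 3, 2, 3}  // "three" is kept
//
// i.e. replacing a[1:2] with b[1:2] turns a into b.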
// Isolate change clusters by eliminating ranges with no changes.
//
// Return a generator of groups with up to n lines of context.
// Each group is in the same format as returned by GetOpCodes().
func (m *SequenceMatcher) GetGroupedOpCodes(n int) [][]OpCode {
	if n < 0 {
		n = 3
	}
	codes := m.GetOpCodes()
	if len(codes) == 0 {
		codes = []OpCode{OpCode{'e', 0, 1, 0, 1}}
	}
	// Fixup leading and trailing groups if they show no changes.
	if codes[0].Tag == 'e' {
		c := codes[0]
		i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
		codes[0] = OpCode{c.Tag, max(i1, i2-n), i2, max(j1, j2-n), j2}
	}
	if codes[len(codes)-1].Tag == 'e' {
		c := codes[len(codes)-1]
		i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
		codes[len(codes)-1] = OpCode{c.Tag, i1, min(i2, i1+n), j1, min(j2, j1+n)}
	}
	nn := n + n
	groups := [][]OpCode{}
	group := []OpCode{}
	for _, c := range codes {
		i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
		// End the current group and start a new one whenever
		// there is a large range with no changes.
		if c.Tag == 'e' && i2-i1 > nn {
			group = append(group, OpCode{c.Tag, i1, min(i2, i1+n),
				j1, min(j2, j1+n)})
			groups = append(groups, group)
			group = []OpCode{}
			i1, j1 = max(i1, i2-n), max(j1, j2-n)
		}
		group = append(group, OpCode{c.Tag, i1, i2, j1, j2})
	}
	if len(group) > 0 && !(len(group) == 1 && group[0].Tag == 'e') {
		groups = append(groups, group)
	}
	return groups
}

// Return a measure of the sequences' similarity (float in [0,1]).
//
// Where T is the total number of elements in both sequences, and
// M is the number of matches, this is 2.0*M / T.
// Note that this is 1 if the sequences are identical, and 0 if
// they have nothing in common.
//
// .Ratio() is expensive to compute if you haven't already computed
// .GetMatchingBlocks() or .GetOpCodes(), in which case you may
// want to try .QuickRatio() or .RealQuickRatio() first to get an
// upper bound.
func (m *SequenceMatcher) Ratio() float64 {
	matches := 0
	for _, m := range m.GetMatchingBlocks() {
		matches += m.Size
	}
	return calculateRatio(matches, len(m.a)+len(m.b))
}

// Return an upper bound on ratio() relatively quickly.
//
// This isn't defined beyond that it is an upper bound on .Ratio(), and
// is faster to compute.
func (m *SequenceMatcher) QuickRatio() float64 {
	// viewing a and b as multisets, set matches to the cardinality
	// of their intersection; this counts the number of matches
	// without regard to order, so is clearly an upper bound
	if m.fullBCount == nil {
		m.fullBCount = map[string]int{}
		for _, s := range m.b {
			m.fullBCount[s] = m.fullBCount[s] + 1
		}
	}

	// avail[x] is the number of times x appears in 'b' less the
	// number of times we've seen it in 'a' so far ... kinda
	avail := map[string]int{}
	matches := 0
	for _, s := range m.a {
		n, ok := avail[s]
		if !ok {
			n = m.fullBCount[s]
		}
		avail[s] = n - 1
		if n > 0 {
			matches += 1
		}
	}
	return calculateRatio(matches, len(m.a)+len(m.b))
}

// Return an upper bound on ratio() very quickly.
//
// This isn't defined beyond that it is an upper bound on .Ratio(), and
// is faster to compute than either .Ratio() or .QuickRatio().
func (m *SequenceMatcher) RealQuickRatio() float64 {
	la, lb := len(m.a), len(m.b)
	return calculateRatio(min(la, lb), la+lb)
}
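// Worked example (editor's illustration, not part of the vendored source):
// for a = {"one", "two", "three"} and b = {"one", "2", "three"},
// the matching blocks cover 2 of the 6 total elements, so
// Ratio() = 2.0*2/6 ≈ 0.667. QuickRatio() and RealQuickRatio() return
// upper bounds on that value (here 0.667 and 1.0 respectively).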
// Convert range to the "ed" format
func formatRangeUnified(start, stop int) string {
	// Per the diff spec at http://www.unix.org/single_unix_specification/
	beginning := start + 1 // lines start numbering with one
	length := stop - start
	if length == 1 {
		return fmt.Sprintf("%d", beginning)
	}
	if length == 0 {
		beginning -= 1 // empty ranges begin at line just before the range
	}
	return fmt.Sprintf("%d,%d", beginning, length)
}

// Unified diff parameters
type UnifiedDiff struct {
	A        []string // First sequence lines
	FromFile string   // First file name
	FromDate string   // First file time
	B        []string // Second sequence lines
	ToFile   string   // Second file name
	ToDate   string   // Second file time
	Eol      string   // Headers end of line, defaults to LF
	Context  int      // Number of context lines
}

// Compare two sequences of lines; generate the delta as a unified diff.
//
// Unified diffs are a compact way of showing line changes and a few
// lines of context. The number of context lines is set by 'n' which
// defaults to three.
//
// By default, the diff control lines (those with ---, +++, or @@) are
// created with a trailing newline. This is helpful so that inputs
// created from file.readlines() result in diffs that are suitable for
// file.writelines() since both the inputs and outputs have trailing
// newlines.
//
// For inputs that do not have trailing newlines, set the lineterm
// argument to "" so that the output will be uniformly newline free.
//
// The unidiff format normally has a header for filenames and modification
// times. Any or all of these may be specified using strings for
// 'fromfile', 'tofile', 'fromfiledate', and 'tofiledate'.
// The modification times are normally expressed in the ISO 8601 format.
func WriteUnifiedDiff(writer io.Writer, diff UnifiedDiff) error {
	buf := bufio.NewWriter(writer)
	defer buf.Flush()
	wf := func(format string, args ...interface{}) error {
		_, err := buf.WriteString(fmt.Sprintf(format, args...))
		return err
	}
	ws := func(s string) error {
		_, err := buf.WriteString(s)
		return err
	}

	if len(diff.Eol) == 0 {
		diff.Eol = "\n"
	}

	started := false
	m := NewMatcher(diff.A, diff.B)
	for _, g := range m.GetGroupedOpCodes(diff.Context) {
		if !started {
			started = true
			fromDate := ""
			if len(diff.FromDate) > 0 {
				fromDate = "\t" + diff.FromDate
			}
			toDate := ""
			if len(diff.ToDate) > 0 {
				toDate = "\t" + diff.ToDate
			}
			if diff.FromFile != "" || diff.ToFile != "" {
				err := wf("--- %s%s%s", diff.FromFile, fromDate, diff.Eol)
				if err != nil {
					return err
				}
				err = wf("+++ %s%s%s", diff.ToFile, toDate, diff.Eol)
				if err != nil {
					return err
				}
			}
		}
		first, last := g[0], g[len(g)-1]
		range1 := formatRangeUnified(first.I1, last.I2)
		range2 := formatRangeUnified(first.J1, last.J2)
		if err := wf("@@ -%s +%s @@%s", range1, range2, diff.Eol); err != nil {
			return err
		}
		for _, c := range g {
			i1, i2, j1, j2 := c.I1, c.I2, c.J1, c.J2
			if c.Tag == 'e' {
				for _, line := range diff.A[i1:i2] {
					if err := ws(" " + line); err != nil {
						return err
					}
				}
				continue
			}
			if c.Tag == 'r' || c.Tag == 'd' {
				for _, line := range diff.A[i1:i2] {
					if err := ws("-" + line); err != nil {
						return err
					}
				}
			}
			if c.Tag == 'r' || c.Tag == 'i' {
				for _, line := range diff.B[j1:j2] {
					if err := ws("+" + line); err != nil {
						return err
					}
				}
			}
		}
	}
	return nil
}

// Like WriteUnifiedDiff but returns the diff as a string.
func GetUnifiedDiffString(diff UnifiedDiff) (string, error) {
	w := &bytes.Buffer{}
	err := WriteUnifiedDiff(w, diff)
	return string(w.Bytes()), err
}

// Convert range to the "ed" format.
func formatRangeContext(start, stop int) string {
	// Per the diff spec at http://www.unix.org/single_unix_specification/
	beginning := start + 1 // lines start numbering with one
	length := stop - start
	if length == 0 {
		beginning -= 1 // empty ranges begin at line just before the range
	}
	if length <= 1 {
		return fmt.Sprintf("%d", beginning)
	}
	return fmt.Sprintf("%d,%d", beginning, beginning+length-1)
}

type ContextDiff UnifiedDiff

// Compare two sequences of lines; generate the delta as a context diff.
//
// Context diffs are a compact way of showing line changes and a few
// lines of context. The number of context lines is set by diff.Context
// which defaults to three.
//
// By default, the diff control lines (those with *** or ---) are
// created with a trailing newline.
//
// For inputs that do not have trailing newlines, set the diff.Eol
// argument to "" so that the output will be uniformly newline free.
//
// The context diff format normally has a header for filenames and
// modification times. Any or all of these may be specified using
// strings for diff.FromFile, diff.ToFile, diff.FromDate, diff.ToDate.
// The modification times are normally expressed in the ISO 8601 format.
// If not specified, the strings default to blanks.
func WriteContextDiff(writer io.Writer, diff ContextDiff) error {
	buf := bufio.NewWriter(writer)
	defer buf.Flush()
	var diffErr error
	wf := func(format string, args ...interface{}) {
		_, err := buf.WriteString(fmt.Sprintf(format, args...))
		if diffErr == nil && err != nil {
			diffErr = err
		}
	}
	ws := func(s string) {
		_, err := buf.WriteString(s)
		if diffErr == nil && err != nil {
			diffErr = err
		}
	}

	if len(diff.Eol) == 0 {
		diff.Eol = "\n"
	}

	prefix := map[byte]string{
		'i': "+ ",
		'd': "- ",
		'r': "! ",
		'e': "  ",
	}

	started := false
	m := NewMatcher(diff.A, diff.B)
	for _, g := range m.GetGroupedOpCodes(diff.Context) {
		if !started {
			started = true
			fromDate := ""
			if len(diff.FromDate) > 0 {
				fromDate = "\t" + diff.FromDate
			}
			toDate := ""
			if len(diff.ToDate) > 0 {
				toDate = "\t" + diff.ToDate
			}
			if diff.FromFile != "" || diff.ToFile != "" {
				wf("*** %s%s%s", diff.FromFile, fromDate, diff.Eol)
				wf("--- %s%s%s", diff.ToFile, toDate, diff.Eol)
			}
		}

		first, last := g[0], g[len(g)-1]
		ws("***************" + diff.Eol)

		range1 := formatRangeContext(first.I1, last.I2)
		wf("*** %s ****%s", range1, diff.Eol)
		for _, c := range g {
			if c.Tag == 'r' || c.Tag == 'd' {
				for _, cc := range g {
					if cc.Tag == 'i' {
						continue
					}
					for _, line := range diff.A[cc.I1:cc.I2] {
						ws(prefix[cc.Tag] + line)
					}
				}
				break
			}
		}

		range2 := formatRangeContext(first.J1, last.J2)
		wf("--- %s ----%s", range2, diff.Eol)
		for _, c := range g {
			if c.Tag == 'r' || c.Tag == 'i' {
				for _, cc := range g {
					if cc.Tag == 'd' {
						continue
					}
					for _, line := range diff.B[cc.J1:cc.J2] {
						ws(prefix[cc.Tag] + line)
					}
				}
				break
			}
		}
	}
	return diffErr
}

// Like WriteContextDiff but returns the diff as a string.
func GetContextDiffString(diff ContextDiff) (string, error) {
	w := &bytes.Buffer{}
	err := WriteContextDiff(w, diff)
	return string(w.Bytes()), err
}

// Split a string on "\n" while preserving them. The output can be used
// as input for UnifiedDiff and ContextDiff structures.
func SplitLines(s string) []string {
	lines := strings.SplitAfter(s, "\n")
	lines[len(lines)-1] += "\n"
	return lines
}
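Taken together, a minimal end-to-end use of this vendored package looks like the following sketch (editor's illustration; file names and inputs are arbitrary):

```go
package main

import (
	"fmt"

	"github.com/pmezard/go-difflib/difflib"
)

func main() {
	diff := difflib.UnifiedDiff{
		A:        difflib.SplitLines("one\ntwo\nthree\n"),
		B:        difflib.SplitLines("one\n2\nthree\n"),
		FromFile: "before.txt",
		ToFile:   "after.txt",
		Context:  3,
	}
	text, err := difflib.GetUnifiedDiffString(diff)
	if err != nil {
		panic(err)
	}
	// Prints a standard unified diff: "--- before.txt", "+++ after.txt",
	// "@@ -1,3 +1,3 @@", then " one", "-two", "+2", " three".
	fmt.Print(text)
}
```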
8
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/.gitignore
generated
vendored
Normal file
@ -0,0 +1,8 @@
*.out
*.swp
*.8
*.6
_obj
_test*
markdown
tags
17
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/.travis.yml
generated
vendored
Normal file
@ -0,0 +1,17 @@
sudo: false
language: go
go:
  - "1.10.x"
  - "1.11.x"
  - tip
matrix:
  fast_finish: true
  allow_failures:
    - go: tip
install:
  - # Do nothing. This is needed to prevent default install action "go get -t -v ./..." from happening here (we want it to happen inside script step).
script:
  - go get -t -v ./...
  - diff -u <(echo -n) <(gofmt -d -s .)
  - go tool vet .
  - go test -v ./...
29
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/LICENSE.txt
generated
vendored
Normal file
@ -0,0 +1,29 @@
Blackfriday is distributed under the Simplified BSD License:

> Copyright © 2011 Russ Ross
> All rights reserved.
>
> Redistribution and use in source and binary forms, with or without
> modification, are permitted provided that the following conditions
> are met:
>
> 1. Redistributions of source code must retain the above copyright
>    notice, this list of conditions and the following disclaimer.
>
> 2. Redistributions in binary form must reproduce the above
>    copyright notice, this list of conditions and the following
>    disclaimer in the documentation and/or other materials provided with
>    the distribution.
>
> THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
> FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
> BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
> CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
> ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
> POSSIBILITY OF SUCH DAMAGE.
335
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/README.md
generated
vendored
Normal file
@ -0,0 +1,335 @@
Blackfriday
[![Build Status][BuildV2SVG]][BuildV2URL]
[![PkgGoDev][PkgGoDevV2SVG]][PkgGoDevV2URL]
===========

Blackfriday is a [Markdown][1] processor implemented in [Go][2]. It
is paranoid about its input (so you can safely feed it user-supplied
data), it is fast, it supports common extensions (tables, smart
punctuation substitutions, etc.), and it is safe for all utf-8
(unicode) input.

HTML output is currently supported, along with Smartypants
extensions.

It started as a translation from C of [Sundown][3].


Installation
------------

Blackfriday is compatible with modern Go releases in module mode.
With Go installed:

    go get github.com/russross/blackfriday/v2

will resolve and add the package to the current development module,
then build and install it. Alternatively, you can achieve the same
if you import it in a package:

    import "github.com/russross/blackfriday/v2"

and `go get` without parameters.

Legacy GOPATH mode is unsupported.


Versions
--------

The currently maintained and recommended version of Blackfriday is `v2`. It's being
developed on its own branch: https://github.com/russross/blackfriday/tree/v2 and the
documentation is available at
https://pkg.go.dev/github.com/russross/blackfriday/v2.

It is `go get`-able in module mode at `github.com/russross/blackfriday/v2`.

Version 2 offers a number of improvements over v1:

* Cleaned up API
* A separate call to [`Parse`][4], which produces an abstract syntax tree for
  the document
* Latest bug fixes
* Flexibility to easily add your own rendering extensions

Potential drawbacks:

* Our benchmarks show v2 to be slightly slower than v1. Currently in the
  ballpark of around 15%.
* API breakage. If you can't afford modifying your code to adhere to the new API
  and don't care too much about the new features, v2 is probably not for you.
* Several bug fixes are trailing behind and still need to be forward-ported to
  v2. See issue [#348](https://github.com/russross/blackfriday/issues/348) for
  tracking.

If you are still interested in the legacy `v1`, you can import it from
`github.com/russross/blackfriday`. Documentation for the legacy v1 can be found
here: https://pkg.go.dev/github.com/russross/blackfriday.


Usage
-----

For the most sensible markdown processing, it is as simple as getting your input
into a byte slice and calling:

```go
output := blackfriday.Run(input)
```

Your input will be parsed and the output rendered with a set of the most popular
extensions enabled. If you want the most basic feature set, corresponding with
the bare Markdown specification, use:

```go
output := blackfriday.Run(input, blackfriday.WithNoExtensions())
```

### Sanitize untrusted content

Blackfriday itself does nothing to protect against malicious content. If you are
dealing with user-supplied markdown, we recommend running Blackfriday's output
through an HTML sanitizer such as [Bluemonday][5].

Here's an example of simple usage of Blackfriday together with Bluemonday:

```go
import (
    "github.com/microcosm-cc/bluemonday"
    "github.com/russross/blackfriday/v2"
)

// ...
unsafe := blackfriday.Run(input)
html := bluemonday.UGCPolicy().SanitizeBytes(unsafe)
```

### Custom options

If you want to customize the set of options, use `blackfriday.WithExtensions`,
`blackfriday.WithRenderer` and `blackfriday.WithRefOverride`, as sketched below.
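A sketch of combining these options (editor's illustration; the extension and flag choices here are arbitrary):

```go
import "github.com/russross/blackfriday/v2"

// Render with an explicit extension set and a custom HTML renderer.
renderer := blackfriday.NewHTMLRenderer(blackfriday.HTMLRendererParameters{
    Flags: blackfriday.CommonHTMLFlags | blackfriday.TOC,
})
output := blackfriday.Run(input,
    blackfriday.WithExtensions(blackfriday.CommonExtensions|blackfriday.Footnotes),
    blackfriday.WithRenderer(renderer),
)
```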
### `blackfriday-tool`

You can also check out `blackfriday-tool` for a more complete example
of how to use it. Download and install it using:

    go get github.com/russross/blackfriday-tool

This is a simple command-line tool that allows you to process a
markdown file using a standalone program. You can also browse the
source directly on github if you are just looking for some example
code:

* <https://github.com/russross/blackfriday-tool>

Note that if you have not already done so, installing
`blackfriday-tool` will be sufficient to download and install
blackfriday in addition to the tool itself. The tool binary will be
installed in `$GOPATH/bin`. This is a statically-linked binary that
can be copied to wherever you need it without worrying about
dependencies and library versions.


### Sanitized anchor names

Blackfriday includes an algorithm for creating sanitized anchor names
corresponding to a given input text. This algorithm is used to create
anchors for headings when the `AutoHeadingIDs` extension is enabled. The
algorithm has a specification, so that other packages can create
compatible anchor names and links to those anchors.

The specification is located at https://pkg.go.dev/github.com/russross/blackfriday/v2#hdr-Sanitized_Anchor_Names.

[`SanitizedAnchorName`](https://pkg.go.dev/github.com/russross/blackfriday/v2#SanitizedAnchorName) exposes this functionality, and can be used to
create compatible links to the anchor names generated by blackfriday.
This algorithm is also implemented in a small standalone package at
[`github.com/shurcooL/sanitized_anchor_name`](https://pkg.go.dev/github.com/shurcooL/sanitized_anchor_name). It can be useful for clients
that want a small package and don't need the full functionality of blackfriday.


Features
--------

All features of Sundown are supported, including:

* **Compatibility**. The Markdown v1.0.3 test suite passes with
  the `--tidy` option. Without `--tidy`, the differences are
  mostly in whitespace and entity escaping, where blackfriday is
  more consistent and cleaner.

* **Common extensions**, including table support, fenced code
  blocks, autolinks, strikethroughs, non-strict emphasis, etc.

* **Safety**. Blackfriday is paranoid when parsing, making it safe
  to feed untrusted user input without fear of bad things
  happening. The test suite stress tests this and there are no
  known inputs that make it crash. If you find one, please let me
  know and send me the input that does it.

  NOTE: "safety" in this context means *runtime safety only*. In order to
  protect yourself against JavaScript injection in untrusted content, see
  [this example](https://github.com/russross/blackfriday#sanitize-untrusted-content).

* **Fast processing**. It is fast enough to render on-demand in
  most web applications without having to cache the output.

* **Thread safety**. You can run multiple parsers in different
  goroutines without ill effect. There is no dependence on global
  shared state.

* **Minimal dependencies**. Blackfriday only depends on standard
  library packages in Go. The source code is pretty
  self-contained, so it is easy to add to any project, including
  Google App Engine projects.

* **Standards compliant**. Output successfully validates using the
  W3C validation tool for HTML 4.01 and XHTML 1.0 Transitional.


Extensions
----------

In addition to the standard markdown syntax, this package
implements the following extensions:

* **Intra-word emphasis suppression**. The `_` character is
  commonly used inside words when discussing code, so having
  markdown interpret it as an emphasis command is usually the
  wrong thing. Blackfriday lets you treat all emphasis markers as
  normal characters when they occur inside a word.

* **Tables**. Tables can be created by drawing them in the input
  using a simple syntax:

  ```
  Name    | Age
  --------|------
  Bob     | 27
  Alice   | 23
  ```

* **Fenced code blocks**. In addition to the normal 4-space
  indentation to mark code blocks, you can explicitly mark them
  and supply a language (to make syntax highlighting simple). Just
  mark it like this:

  ```go
  func getTrue() bool {
      return true
  }
  ```

  You can use 3 or more backticks to mark the beginning of the
  block, and the same number to mark the end of the block.

  To preserve classes of fenced code blocks while using the bluemonday
  HTML sanitizer, use the following policy:

  ```go
  p := bluemonday.UGCPolicy()
  p.AllowAttrs("class").Matching(regexp.MustCompile("^language-[a-zA-Z0-9]+$")).OnElements("code")
  html := p.SanitizeBytes(unsafe)
  ```

* **Definition lists**. A simple definition list is made of a single-line
  term followed by a colon and the definition for that term.

      Cat
      : Fluffy animal everyone likes

      Internet
      : Vector of transmission for pictures of cats

  Terms must be separated from the previous definition by a blank line.

* **Footnotes**. A marker in the text that will become a superscript number;
  a footnote definition that will be placed in a list of footnotes at the
  end of the document. A footnote looks like this:

      This is a footnote.[^1]

      [^1]: the footnote text.

* **Autolinking**. Blackfriday can find URLs that have not been
  explicitly marked as links and turn them into links.

* **Strikethrough**. Use two tildes (`~~`) to mark text that
  should be crossed out.

* **Hard line breaks**. With this extension enabled, newlines in the input
  translate into line breaks in the output. This extension is off by default.

* **Smart quotes**. Smartypants-style punctuation substitution is
  supported, turning normal double- and single-quote marks into
  curly quotes, etc.

* **LaTeX-style dash parsing** is an additional option, where `--`
  is translated into `&ndash;`, and `---` is translated into
  `&mdash;`. This differs from most smartypants processors, which
  turn a single hyphen into an ndash and a double hyphen into an
  mdash.

* **Smart fractions**, where anything that looks like a fraction
  is translated into suitable HTML (instead of just a few special
  cases like most smartypants processors). For example, `4/5`
  becomes `<sup>4</sup>&frasl;<sub>5</sub>`, which renders as
  <sup>4</sup>&frasl;<sub>5</sub>.


Other renderers
---------------

Blackfriday is structured to allow alternative rendering engines. Here
are a few of note:

* [github_flavored_markdown](https://pkg.go.dev/github.com/shurcooL/github_flavored_markdown):
  provides a GitHub Flavored Markdown renderer with fenced code block
  highlighting and clickable heading anchor links.

  It's not customizable, and its goal is to produce HTML output
  equivalent to the [GitHub Markdown API endpoint](https://developer.github.com/v3/markdown/#render-a-markdown-document-in-raw-mode),
  except the rendering is performed locally.

* [markdownfmt](https://github.com/shurcooL/markdownfmt): like gofmt,
  but for markdown.

* [LaTeX output](https://gitlab.com/ambrevar/blackfriday-latex):
  renders output as LaTeX.

* [bfchroma](https://github.com/Depado/bfchroma/): provides convenience
  integration with the [Chroma](https://github.com/alecthomas/chroma) code
  highlighting library. bfchroma is only compatible with v2 of Blackfriday and
  provides a drop-in renderer ready to use with Blackfriday, as well as
  options and means for further customization.

* [Blackfriday-Confluence](https://github.com/kentaro-m/blackfriday-confluence): provides a [Confluence Wiki Markup](https://confluence.atlassian.com/doc/confluence-wiki-markup-251003035.html) renderer.

* [Blackfriday-Slack](https://github.com/karriereat/blackfriday-slack): converts markdown to Slack message style.


TODO
----

* More unit testing
* Improve Unicode support. It does not understand all Unicode
  rules (about what constitutes a letter, a punctuation symbol,
  etc.), so it may fail to detect word boundaries correctly in
  some instances. It is safe on all UTF-8 input.


License
-------

[Blackfriday is distributed under the Simplified BSD License](LICENSE.txt)


[1]: https://daringfireball.net/projects/markdown/ "Markdown"
[2]: https://golang.org/ "Go Language"
[3]: https://github.com/vmg/sundown "Sundown"
[4]: https://pkg.go.dev/github.com/russross/blackfriday/v2#Parse "Parse func"
[5]: https://github.com/microcosm-cc/bluemonday "Bluemonday"

[BuildV2SVG]: https://travis-ci.org/russross/blackfriday.svg?branch=v2
[BuildV2URL]: https://travis-ci.org/russross/blackfriday
[PkgGoDevV2SVG]: https://pkg.go.dev/badge/github.com/russross/blackfriday/v2
[PkgGoDevV2URL]: https://pkg.go.dev/github.com/russross/blackfriday/v2
1612
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/block.go
generated
vendored
Normal file
File diff suppressed because it is too large
46
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/doc.go
generated
vendored
Normal file
@ -0,0 +1,46 @@
// Package blackfriday is a markdown processor.
//
// It translates plain text with simple formatting rules into an AST, which can
// then be further processed to HTML (provided by Blackfriday itself) or other
// formats (provided by the community).
//
// The simplest way to invoke Blackfriday is to call the Run function. It will
// take a text input and produce a text output in HTML (or other format).
//
// A slightly more sophisticated way to use Blackfriday is to create a Markdown
// processor and to call Parse, which returns a syntax tree for the input
// document. You can leverage Blackfriday's parsing for content extraction from
// markdown documents. You can assign a custom renderer and set various options
// to the Markdown processor.
//
// If you're interested in calling Blackfriday from command line, see
// https://github.com/russross/blackfriday-tool.
//
// Sanitized Anchor Names
//
// Blackfriday includes an algorithm for creating sanitized anchor names
// corresponding to a given input text. This algorithm is used to create
// anchors for headings when AutoHeadingIDs extension is enabled. The
// algorithm is specified below, so that other packages can create
// compatible anchor names and links to those anchors.
//
// The algorithm iterates over the input text, interpreted as UTF-8,
// one Unicode code point (rune) at a time. All runes that are letters (category L)
// or numbers (category N) are considered valid characters. They are mapped to
// lower case, and included in the output. All other runes are considered
// invalid characters. Invalid characters that precede the first valid character,
// as well as invalid characters that follow the last valid character
// are dropped completely. All other sequences of invalid characters
// between two valid characters are replaced with a single dash character '-'.
//
// SanitizedAnchorName exposes this functionality, and can be used to
// create compatible links to the anchor names generated by blackfriday.
// This algorithm is also implemented in a small standalone package at
// github.com/shurcooL/sanitized_anchor_name. It can be useful for clients
// that want a small package and don't need full functionality of blackfriday.
package blackfriday

// NOTE: Keep Sanitized Anchor Name algorithm in sync with package
// github.com/shurcooL/sanitized_anchor_name.
// Otherwise, users of sanitized_anchor_name will get anchor names
// that are incompatible with those generated by blackfriday.
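A one-line illustration of the anchor-name algorithm described above (editor's example; `SanitizedAnchorName` is the exported entry point):

```go
id := blackfriday.SanitizedAnchorName("This is a header")
// id == "this-is-a-header": letters and numbers are lowercased,
// everything else collapses to a single dash.
```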
2236
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/entities.go
generated
vendored
Normal file
File diff suppressed because it is too large
70
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/esc.go
generated
vendored
Normal file
@ -0,0 +1,70 @@
package blackfriday

import (
	"html"
	"io"
)

var htmlEscaper = [256][]byte{
	'&': []byte("&amp;"),
	'<': []byte("&lt;"),
	'>': []byte("&gt;"),
	'"': []byte("&quot;"),
}

func escapeHTML(w io.Writer, s []byte) {
	escapeEntities(w, s, false)
}

func escapeAllHTML(w io.Writer, s []byte) {
	escapeEntities(w, s, true)
}

func escapeEntities(w io.Writer, s []byte, escapeValidEntities bool) {
	var start, end int
	for end < len(s) {
		escSeq := htmlEscaper[s[end]]
		if escSeq != nil {
			isEntity, entityEnd := nodeIsEntity(s, end)
			if isEntity && !escapeValidEntities {
				w.Write(s[start : entityEnd+1])
				start = entityEnd + 1
			} else {
				w.Write(s[start:end])
				w.Write(escSeq)
				start = end + 1
			}
		}
		end++
	}
	if start < len(s) && end <= len(s) {
		w.Write(s[start:end])
	}
}

func nodeIsEntity(s []byte, end int) (isEntity bool, endEntityPos int) {
	isEntity = false
	endEntityPos = end + 1

	if s[end] == '&' {
		for endEntityPos < len(s) {
			if s[endEntityPos] == ';' {
				if entities[string(s[end:endEntityPos+1])] {
					isEntity = true
					break
				}
			}
			if !isalnum(s[endEntityPos]) && s[endEntityPos] != '&' && s[endEntityPos] != '#' {
				break
			}
			endEntityPos++
		}
	}

	return isEntity, endEntityPos
}

func escLink(w io.Writer, text []byte) {
	unesc := html.UnescapeString(string(text))
	escapeHTML(w, []byte(unesc))
}
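// Behavior sketch (editor's illustration, not part of the vendored source;
// these functions are unexported, so the calls below run inside the package):
//
//	escapeHTML(w, []byte(`<a href="x">&amp;</a>`))
//	// writes: &lt;a href=&quot;x&quot;&gt;&amp;&lt;/a&gt;
//	// (the already-valid entity &amp; passes through untouched)
//
//	escapeAllHTML(w, []byte("&amp;"))
//	// writes: &amp;amp;  (valid entities are escaped too)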
952
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/html.go
generated
vendored
Normal file
@ -0,0 +1,952 @@
//
// Blackfriday Markdown Processor
// Available at http://github.com/russross/blackfriday
//
// Copyright © 2011 Russ Ross <russ@russross.com>.
// Distributed under the Simplified BSD License.
// See README.md for details.
//

//
//
// HTML rendering backend
//
//

package blackfriday

import (
	"bytes"
	"fmt"
	"io"
	"regexp"
	"strings"
)

// HTMLFlags control optional behavior of HTML renderer.
type HTMLFlags int

// HTML renderer configuration options.
const (
	HTMLFlagsNone           HTMLFlags = 0
	SkipHTML                HTMLFlags = 1 << iota // Skip preformatted HTML blocks
	SkipImages                                    // Skip embedded images
	SkipLinks                                     // Skip all links
	Safelink                                      // Only link to trusted protocols
	NofollowLinks                                 // Only link with rel="nofollow"
	NoreferrerLinks                               // Only link with rel="noreferrer"
	NoopenerLinks                                 // Only link with rel="noopener"
	HrefTargetBlank                               // Add a blank target
	CompletePage                                  // Generate a complete HTML page
	UseXHTML                                      // Generate XHTML output instead of HTML
	FootnoteReturnLinks                           // Generate a link at the end of a footnote to return to the source
	Smartypants                                   // Enable smart punctuation substitutions
	SmartypantsFractions                          // Enable smart fractions (with Smartypants)
	SmartypantsDashes                             // Enable smart dashes (with Smartypants)
	SmartypantsLatexDashes                        // Enable LaTeX-style dashes (with Smartypants)
	SmartypantsAngledQuotes                       // Enable angled double quotes (with Smartypants) for double quotes rendering
	SmartypantsQuotesNBSP                         // Enable « French guillemets » (with Smartypants)
	TOC                                           // Generate a table of contents
)

var (
	htmlTagRe = regexp.MustCompile("(?i)^" + htmlTag)
)

const (
	htmlTag = "(?:" + openTag + "|" + closeTag + "|" + htmlComment + "|" +
		processingInstruction + "|" + declaration + "|" + cdata + ")"
	closeTag              = "</" + tagName + "\\s*[>]"
	openTag               = "<" + tagName + attribute + "*" + "\\s*/?>"
	attribute             = "(?:" + "\\s+" + attributeName + attributeValueSpec + "?)"
	attributeValue        = "(?:" + unquotedValue + "|" + singleQuotedValue + "|" + doubleQuotedValue + ")"
	attributeValueSpec    = "(?:" + "\\s*=" + "\\s*" + attributeValue + ")"
	attributeName         = "[a-zA-Z_:][a-zA-Z0-9:._-]*"
	cdata                 = "<!\\[CDATA\\[[\\s\\S]*?\\]\\]>"
	declaration           = "<![A-Z]+" + "\\s+[^>]*>"
	doubleQuotedValue     = "\"[^\"]*\""
	htmlComment           = "<!---->|<!--(?:-?[^>-])(?:-?[^-])*-->"
	processingInstruction = "[<][?].*?[?][>]"
	singleQuotedValue     = "'[^']*'"
	tagName               = "[A-Za-z][A-Za-z0-9-]*"
	unquotedValue         = "[^\"'=<>`\\x00-\\x20]+"
)

// HTMLRendererParameters is a collection of supplementary parameters tweaking
// the behavior of various parts of HTML renderer.
type HTMLRendererParameters struct {
	// Prepend this text to each relative URL.
	AbsolutePrefix string
	// Add this text to each footnote anchor, to ensure uniqueness.
	FootnoteAnchorPrefix string
	// Show this text inside the <a> tag for a footnote return link, if the
	// HTML_FOOTNOTE_RETURN_LINKS flag is enabled. If blank, the string
	// <sup>[return]</sup> is used.
	FootnoteReturnLinkContents string
	// If set, add this text to the front of each Heading ID, to ensure
	// uniqueness.
	HeadingIDPrefix string
	// If set, add this text to the back of each Heading ID, to ensure uniqueness.
	HeadingIDSuffix string
	// Increase heading levels: if the offset is 1, <h1> becomes <h2> etc.
	// Negative offset is also valid.
	// Resulting levels are clipped between 1 and 6.
	HeadingLevelOffset int

	Title string // Document title (used if CompletePage is set)
	CSS   string // Optional CSS file URL (used if CompletePage is set)
	Icon  string // Optional icon file URL (used if CompletePage is set)

	Flags HTMLFlags // Flags allow customizing this renderer's behavior
}

// HTMLRenderer is a type that implements the Renderer interface for HTML output.
//
// Do not create this directly, instead use the NewHTMLRenderer function.
type HTMLRenderer struct {
	HTMLRendererParameters

	closeTag string // how to end singleton tags: either " />" or ">"

	// Track heading IDs to prevent ID collision in a single generation.
	headingIDs map[string]int

	lastOutputLen int
	disableTags   int

	sr *SPRenderer
}

const (
	xhtmlClose = " />"
	htmlClose  = ">"
)

// NewHTMLRenderer creates and configures an HTMLRenderer object, which
// satisfies the Renderer interface.
func NewHTMLRenderer(params HTMLRendererParameters) *HTMLRenderer {
	// configure the rendering engine
	closeTag := htmlClose
	if params.Flags&UseXHTML != 0 {
		closeTag = xhtmlClose
	}

	if params.FootnoteReturnLinkContents == "" {
		// U+FE0E is VARIATION SELECTOR-15.
		// It suppresses automatic emoji presentation of the preceding
		// U+21A9 LEFTWARDS ARROW WITH HOOK on iOS and iPadOS.
		params.FootnoteReturnLinkContents = "<span aria-label='Return'>↩\ufe0e</span>"
	}

	return &HTMLRenderer{
		HTMLRendererParameters: params,

		closeTag:   closeTag,
		headingIDs: make(map[string]int),

		sr: NewSmartypantsRenderer(params.Flags),
	}
}
|
||||
func isHTMLTag(tag []byte, tagname string) bool {
|
||||
found, _ := findHTMLTagPos(tag, tagname)
|
||||
return found
|
||||
}
|
||||
|
||||
// Look for a character, but ignore it when it's in any kind of quotes, it
|
||||
// might be JavaScript
|
||||
func skipUntilCharIgnoreQuotes(html []byte, start int, char byte) int {
|
||||
inSingleQuote := false
|
||||
inDoubleQuote := false
|
||||
inGraveQuote := false
|
||||
i := start
|
||||
for i < len(html) {
|
||||
switch {
|
||||
case html[i] == char && !inSingleQuote && !inDoubleQuote && !inGraveQuote:
|
||||
return i
|
||||
case html[i] == '\'':
|
||||
inSingleQuote = !inSingleQuote
|
||||
case html[i] == '"':
|
||||
inDoubleQuote = !inDoubleQuote
|
||||
case html[i] == '`':
|
||||
inGraveQuote = !inGraveQuote
|
||||
}
|
||||
i++
|
||||
}
|
||||
return start
|
||||
}
|
||||
|
||||
func findHTMLTagPos(tag []byte, tagname string) (bool, int) {
|
||||
i := 0
|
||||
if i < len(tag) && tag[0] != '<' {
|
||||
return false, -1
|
||||
}
|
||||
i++
|
||||
i = skipSpace(tag, i)
|
||||
|
||||
if i < len(tag) && tag[i] == '/' {
|
||||
i++
|
||||
}
|
||||
|
||||
i = skipSpace(tag, i)
|
||||
j := 0
|
||||
for ; i < len(tag); i, j = i+1, j+1 {
|
||||
if j >= len(tagname) {
|
||||
break
|
||||
}
|
||||
|
||||
if strings.ToLower(string(tag[i]))[0] != tagname[j] {
|
||||
return false, -1
|
||||
}
|
||||
}
|
||||
|
||||
if i == len(tag) {
|
||||
return false, -1
|
||||
}
|
||||
|
||||
rightAngle := skipUntilCharIgnoreQuotes(tag, i, '>')
|
||||
if rightAngle >= i {
|
||||
return true, rightAngle
|
||||
}
|
||||
|
||||
return false, -1
|
||||
}
|
||||
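
// Illustrative examples (a sketch, not part of the upstream file):
// findHTMLTagPos scans a raw tag for a case-insensitive tag name and reports
// the position of the closing '>'. isHTMLTag above just keeps the boolean:
//
//	isHTMLTag([]byte("<div class='x'>"), "div") // true
//	isHTMLTag([]byte("</div>"), "div")          // true (the leading '/' is skipped)
//	isHTMLTag([]byte("<span>"), "div")          // false
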
func skipSpace(tag []byte, i int) int {
	for i < len(tag) && isspace(tag[i]) {
		i++
	}
	return i
}

func isRelativeLink(link []byte) (yes bool) {
	// a link that begins with '#' is a fragment reference
	if link[0] == '#' {
		return true
	}

	// a link that begins with '/' but not '//'; the latter may be a
	// protocol-relative link
	if len(link) >= 2 && link[0] == '/' && link[1] != '/' {
		return true
	}

	// only the root '/'
	if len(link) == 1 && link[0] == '/' {
		return true
	}

	// current directory: begins with "./"
	if bytes.HasPrefix(link, []byte("./")) {
		return true
	}

	// parent directory: begins with "../"
	if bytes.HasPrefix(link, []byte("../")) {
		return true
	}

	return false
}
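
// Illustrative examples (a sketch, not part of the upstream file) of what
// isRelativeLink accepts and rejects:
//
//	isRelativeLink([]byte("#section"))          // true  (fragment)
//	isRelativeLink([]byte("/docs/intro"))       // true  (site-absolute path)
//	isRelativeLink([]byte("./readme.md"))       // true  (current directory)
//	isRelativeLink([]byte("//cdn.example.com")) // false (protocol-relative)
//	isRelativeLink([]byte("https://a.example")) // false (absolute URL)
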
func (r *HTMLRenderer) ensureUniqueHeadingID(id string) string {
	for count, found := r.headingIDs[id]; found; count, found = r.headingIDs[id] {
		tmp := fmt.Sprintf("%s-%d", id, count+1)

		if _, tmpFound := r.headingIDs[tmp]; !tmpFound {
			r.headingIDs[id] = count + 1
			id = tmp
		} else {
			id = id + "-1"
		}
	}

	if _, found := r.headingIDs[id]; !found {
		r.headingIDs[id] = 0
	}

	return id
}
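
// Illustrative sketch (not part of the upstream file): repeated IDs within a
// single render pass are suffixed with an incrementing counter.
//
//	r := NewHTMLRenderer(HTMLRendererParameters{})
//	r.ensureUniqueHeadingID("intro") // "intro"
//	r.ensureUniqueHeadingID("intro") // "intro-1"
//	r.ensureUniqueHeadingID("intro") // "intro-2"
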
func (r *HTMLRenderer) addAbsPrefix(link []byte) []byte {
	if r.AbsolutePrefix != "" && isRelativeLink(link) && link[0] != '.' {
		newDest := r.AbsolutePrefix
		if link[0] != '/' {
			newDest += "/"
		}
		newDest += string(link)
		return []byte(newDest)
	}
	return link
}

func appendLinkAttrs(attrs []string, flags HTMLFlags, link []byte) []string {
	if isRelativeLink(link) {
		return attrs
	}
	val := []string{}
	if flags&NofollowLinks != 0 {
		val = append(val, "nofollow")
	}
	if flags&NoreferrerLinks != 0 {
		val = append(val, "noreferrer")
	}
	if flags&NoopenerLinks != 0 {
		val = append(val, "noopener")
	}
	if flags&HrefTargetBlank != 0 {
		attrs = append(attrs, "target=\"_blank\"")
	}
	if len(val) == 0 {
		return attrs
	}
	attr := fmt.Sprintf("rel=%q", strings.Join(val, " "))
	return append(attrs, attr)
}
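
// Illustrative sketch (not part of the upstream file): for an absolute link,
// the requested rel values are merged into a single attribute; relative links
// are returned untouched.
//
//	attrs := appendLinkAttrs(nil, NofollowLinks|NoreferrerLinks, []byte("https://example.com"))
//	// attrs == []string{`rel="nofollow noreferrer"`}
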
func isMailto(link []byte) bool {
	return bytes.HasPrefix(link, []byte("mailto:"))
}

func needSkipLink(flags HTMLFlags, dest []byte) bool {
	if flags&SkipLinks != 0 {
		return true
	}
	return flags&Safelink != 0 && !isSafeLink(dest) && !isMailto(dest)
}

func isSmartypantable(node *Node) bool {
	pt := node.Parent.Type
	return pt != Link && pt != CodeBlock && pt != Code
}

func appendLanguageAttr(attrs []string, info []byte) []string {
	if len(info) == 0 {
		return attrs
	}
	endOfLang := bytes.IndexAny(info, "\t ")
	if endOfLang < 0 {
		endOfLang = len(info)
	}
	return append(attrs, fmt.Sprintf("class=\"language-%s\"", info[:endOfLang]))
}

func (r *HTMLRenderer) tag(w io.Writer, name []byte, attrs []string) {
	w.Write(name)
	if len(attrs) > 0 {
		w.Write(spaceBytes)
		w.Write([]byte(strings.Join(attrs, " ")))
	}
	w.Write(gtBytes)
	r.lastOutputLen = 1
}

func footnoteRef(prefix string, node *Node) []byte {
	urlFrag := prefix + string(slugify(node.Destination))
	anchor := fmt.Sprintf(`<a href="#fn:%s">%d</a>`, urlFrag, node.NoteID)
	return []byte(fmt.Sprintf(`<sup class="footnote-ref" id="fnref:%s">%s</sup>`, urlFrag, anchor))
}

func footnoteItem(prefix string, slug []byte) []byte {
	return []byte(fmt.Sprintf(`<li id="fn:%s%s">`, prefix, slug))
}

func footnoteReturnLink(prefix, returnLink string, slug []byte) []byte {
	const format = ` <a class="footnote-return" href="#fnref:%s%s">%s</a>`
	return []byte(fmt.Sprintf(format, prefix, slug, returnLink))
}

func itemOpenCR(node *Node) bool {
	if node.Prev == nil {
		return false
	}
	ld := node.Parent.ListData
	return !ld.Tight && ld.ListFlags&ListTypeDefinition == 0
}

func skipParagraphTags(node *Node) bool {
	grandparent := node.Parent.Parent
	if grandparent == nil || grandparent.Type != List {
		return false
	}
	tightOrTerm := grandparent.Tight || node.Parent.ListFlags&ListTypeTerm != 0
	return grandparent.Type == List && tightOrTerm
}

func cellAlignment(align CellAlignFlags) string {
	switch align {
	case TableAlignmentLeft:
		return "left"
	case TableAlignmentRight:
		return "right"
	case TableAlignmentCenter:
		return "center"
	default:
		return ""
	}
}

func (r *HTMLRenderer) out(w io.Writer, text []byte) {
	if r.disableTags > 0 {
		w.Write(htmlTagRe.ReplaceAll(text, []byte{}))
	} else {
		w.Write(text)
	}
	r.lastOutputLen = len(text)
}

func (r *HTMLRenderer) cr(w io.Writer) {
	if r.lastOutputLen > 0 {
		r.out(w, nlBytes)
	}
}

var (
	nlBytes    = []byte{'\n'}
	gtBytes    = []byte{'>'}
	spaceBytes = []byte{' '}
)

var (
	brTag              = []byte("<br>")
	brXHTMLTag         = []byte("<br />")
	emTag              = []byte("<em>")
	emCloseTag         = []byte("</em>")
	strongTag          = []byte("<strong>")
	strongCloseTag     = []byte("</strong>")
	delTag             = []byte("<del>")
	delCloseTag        = []byte("</del>")
	ttTag              = []byte("<tt>")
	ttCloseTag         = []byte("</tt>")
	aTag               = []byte("<a")
	aCloseTag          = []byte("</a>")
	preTag             = []byte("<pre>")
	preCloseTag        = []byte("</pre>")
	codeTag            = []byte("<code>")
	codeCloseTag       = []byte("</code>")
	pTag               = []byte("<p>")
	pCloseTag          = []byte("</p>")
	blockquoteTag      = []byte("<blockquote>")
	blockquoteCloseTag = []byte("</blockquote>")
	hrTag              = []byte("<hr>")
	hrXHTMLTag         = []byte("<hr />")
	ulTag              = []byte("<ul>")
	ulCloseTag         = []byte("</ul>")
	olTag              = []byte("<ol>")
	olCloseTag         = []byte("</ol>")
	dlTag              = []byte("<dl>")
	dlCloseTag         = []byte("</dl>")
	liTag              = []byte("<li>")
	liCloseTag         = []byte("</li>")
	ddTag              = []byte("<dd>")
	ddCloseTag         = []byte("</dd>")
	dtTag              = []byte("<dt>")
	dtCloseTag         = []byte("</dt>")
	tableTag           = []byte("<table>")
	tableCloseTag      = []byte("</table>")
	tdTag              = []byte("<td")
	tdCloseTag         = []byte("</td>")
	thTag              = []byte("<th")
	thCloseTag         = []byte("</th>")
	theadTag           = []byte("<thead>")
	theadCloseTag      = []byte("</thead>")
	tbodyTag           = []byte("<tbody>")
	tbodyCloseTag      = []byte("</tbody>")
	trTag              = []byte("<tr>")
	trCloseTag         = []byte("</tr>")
	h1Tag              = []byte("<h1")
	h1CloseTag         = []byte("</h1>")
	h2Tag              = []byte("<h2")
	h2CloseTag         = []byte("</h2>")
	h3Tag              = []byte("<h3")
	h3CloseTag         = []byte("</h3>")
	h4Tag              = []byte("<h4")
	h4CloseTag         = []byte("</h4>")
	h5Tag              = []byte("<h5")
	h5CloseTag         = []byte("</h5>")
	h6Tag              = []byte("<h6")
	h6CloseTag         = []byte("</h6>")

	footnotesDivBytes      = []byte("\n<div class=\"footnotes\">\n\n")
	footnotesCloseDivBytes = []byte("\n</div>\n")
)

func headingTagsFromLevel(level int) ([]byte, []byte) {
	if level <= 1 {
		return h1Tag, h1CloseTag
	}
	switch level {
	case 2:
		return h2Tag, h2CloseTag
	case 3:
		return h3Tag, h3CloseTag
	case 4:
		return h4Tag, h4CloseTag
	case 5:
		return h5Tag, h5CloseTag
	}
	return h6Tag, h6CloseTag
}

func (r *HTMLRenderer) outHRTag(w io.Writer) {
	if r.Flags&UseXHTML == 0 {
		r.out(w, hrTag)
	} else {
		r.out(w, hrXHTMLTag)
	}
}

// RenderNode is a default renderer of a single node of a syntax tree. For
// block nodes it will be called twice: first time with entering=true, second
// time with entering=false, so that it could know when it's working on an open
// tag and when on close. It writes the result to w.
//
// The return value is a way to tell the calling walker to adjust its walk
// pattern: e.g. it can terminate the traversal by returning Terminate. Or it
// can ask the walker to skip a subtree of this node by returning SkipChildren.
// The typical behavior is to return GoToNext, which asks for the usual
// traversal to the next node.
func (r *HTMLRenderer) RenderNode(w io.Writer, node *Node, entering bool) WalkStatus {
	attrs := []string{}
	switch node.Type {
	case Text:
		if r.Flags&Smartypants != 0 {
			var tmp bytes.Buffer
			escapeHTML(&tmp, node.Literal)
			r.sr.Process(w, tmp.Bytes())
		} else {
			if node.Parent.Type == Link {
				escLink(w, node.Literal)
			} else {
				escapeHTML(w, node.Literal)
			}
		}
	case Softbreak:
		r.cr(w)
		// TODO: make it configurable via out(renderer.softbreak)
	case Hardbreak:
		if r.Flags&UseXHTML == 0 {
			r.out(w, brTag)
		} else {
			r.out(w, brXHTMLTag)
		}
		r.cr(w)
	case Emph:
		if entering {
			r.out(w, emTag)
		} else {
			r.out(w, emCloseTag)
		}
	case Strong:
		if entering {
			r.out(w, strongTag)
		} else {
			r.out(w, strongCloseTag)
		}
	case Del:
		if entering {
			r.out(w, delTag)
		} else {
			r.out(w, delCloseTag)
		}
	case HTMLSpan:
		if r.Flags&SkipHTML != 0 {
			break
		}
		r.out(w, node.Literal)
	case Link:
		// mark it but don't link it if it is not a safe link: no smartypants
		dest := node.LinkData.Destination
		if needSkipLink(r.Flags, dest) {
			if entering {
				r.out(w, ttTag)
			} else {
				r.out(w, ttCloseTag)
			}
		} else {
			if entering {
				dest = r.addAbsPrefix(dest)
				var hrefBuf bytes.Buffer
				hrefBuf.WriteString("href=\"")
				escLink(&hrefBuf, dest)
				hrefBuf.WriteByte('"')
				attrs = append(attrs, hrefBuf.String())
				if node.NoteID != 0 {
					r.out(w, footnoteRef(r.FootnoteAnchorPrefix, node))
					break
				}
				attrs = appendLinkAttrs(attrs, r.Flags, dest)
				if len(node.LinkData.Title) > 0 {
					var titleBuff bytes.Buffer
					titleBuff.WriteString("title=\"")
					escapeHTML(&titleBuff, node.LinkData.Title)
					titleBuff.WriteByte('"')
					attrs = append(attrs, titleBuff.String())
				}
				r.tag(w, aTag, attrs)
			} else {
				if node.NoteID != 0 {
					break
				}
				r.out(w, aCloseTag)
			}
		}
	case Image:
		if r.Flags&SkipImages != 0 {
			return SkipChildren
		}
		if entering {
			dest := node.LinkData.Destination
			dest = r.addAbsPrefix(dest)
			if r.disableTags == 0 {
				//if options.safe && potentiallyUnsafe(dest) {
				//out(w, `<img src="" alt="`)
				//} else {
				r.out(w, []byte(`<img src="`))
				escLink(w, dest)
				r.out(w, []byte(`" alt="`))
				//}
			}
			r.disableTags++
		} else {
			r.disableTags--
			if r.disableTags == 0 {
				if node.LinkData.Title != nil {
					r.out(w, []byte(`" title="`))
					escapeHTML(w, node.LinkData.Title)
				}
				r.out(w, []byte(`" />`))
			}
		}
	case Code:
		r.out(w, codeTag)
		escapeAllHTML(w, node.Literal)
		r.out(w, codeCloseTag)
	case Document:
		break
	case Paragraph:
		if skipParagraphTags(node) {
			break
		}
		if entering {
			// TODO: untangle this clusterfuck about when the newlines need
			// to be added and when not.
			if node.Prev != nil {
				switch node.Prev.Type {
				case HTMLBlock, List, Paragraph, Heading, CodeBlock, BlockQuote, HorizontalRule:
					r.cr(w)
				}
			}
			if node.Parent.Type == BlockQuote && node.Prev == nil {
				r.cr(w)
			}
			r.out(w, pTag)
		} else {
			r.out(w, pCloseTag)
			if !(node.Parent.Type == Item && node.Next == nil) {
				r.cr(w)
			}
		}
	case BlockQuote:
		if entering {
			r.cr(w)
			r.out(w, blockquoteTag)
		} else {
			r.out(w, blockquoteCloseTag)
			r.cr(w)
		}
	case HTMLBlock:
		if r.Flags&SkipHTML != 0 {
			break
		}
		r.cr(w)
		r.out(w, node.Literal)
		r.cr(w)
	case Heading:
		headingLevel := r.HTMLRendererParameters.HeadingLevelOffset + node.Level
		openTag, closeTag := headingTagsFromLevel(headingLevel)
		if entering {
			if node.IsTitleblock {
				attrs = append(attrs, `class="title"`)
			}
			if node.HeadingID != "" {
				id := r.ensureUniqueHeadingID(node.HeadingID)
				if r.HeadingIDPrefix != "" {
					id = r.HeadingIDPrefix + id
				}
				if r.HeadingIDSuffix != "" {
					id = id + r.HeadingIDSuffix
				}
				attrs = append(attrs, fmt.Sprintf(`id="%s"`, id))
			}
			r.cr(w)
			r.tag(w, openTag, attrs)
		} else {
			r.out(w, closeTag)
			if !(node.Parent.Type == Item && node.Next == nil) {
				r.cr(w)
			}
		}
	case HorizontalRule:
		r.cr(w)
		r.outHRTag(w)
		r.cr(w)
	case List:
		openTag := ulTag
		closeTag := ulCloseTag
		if node.ListFlags&ListTypeOrdered != 0 {
			openTag = olTag
			closeTag = olCloseTag
		}
		if node.ListFlags&ListTypeDefinition != 0 {
			openTag = dlTag
			closeTag = dlCloseTag
		}
		if entering {
			if node.IsFootnotesList {
				r.out(w, footnotesDivBytes)
				r.outHRTag(w)
				r.cr(w)
			}
			r.cr(w)
			if node.Parent.Type == Item && node.Parent.Parent.Tight {
				r.cr(w)
			}
			r.tag(w, openTag[:len(openTag)-1], attrs)
			r.cr(w)
		} else {
			r.out(w, closeTag)
			//cr(w)
			//if node.parent.Type != Item {
			//	cr(w)
			//}
			if node.Parent.Type == Item && node.Next != nil {
				r.cr(w)
			}
			if node.Parent.Type == Document || node.Parent.Type == BlockQuote {
				r.cr(w)
			}
			if node.IsFootnotesList {
				r.out(w, footnotesCloseDivBytes)
			}
		}
	case Item:
		openTag := liTag
		closeTag := liCloseTag
		if node.ListFlags&ListTypeDefinition != 0 {
			openTag = ddTag
			closeTag = ddCloseTag
		}
		if node.ListFlags&ListTypeTerm != 0 {
			openTag = dtTag
			closeTag = dtCloseTag
		}
		if entering {
			if itemOpenCR(node) {
				r.cr(w)
			}
			if node.ListData.RefLink != nil {
				slug := slugify(node.ListData.RefLink)
				r.out(w, footnoteItem(r.FootnoteAnchorPrefix, slug))
				break
			}
			r.out(w, openTag)
		} else {
			if node.ListData.RefLink != nil {
				slug := slugify(node.ListData.RefLink)
				if r.Flags&FootnoteReturnLinks != 0 {
					r.out(w, footnoteReturnLink(r.FootnoteAnchorPrefix, r.FootnoteReturnLinkContents, slug))
				}
			}
			r.out(w, closeTag)
			r.cr(w)
		}
	case CodeBlock:
		attrs = appendLanguageAttr(attrs, node.Info)
		r.cr(w)
		r.out(w, preTag)
		r.tag(w, codeTag[:len(codeTag)-1], attrs)
		escapeAllHTML(w, node.Literal)
		r.out(w, codeCloseTag)
		r.out(w, preCloseTag)
		if node.Parent.Type != Item {
			r.cr(w)
		}
	case Table:
		if entering {
			r.cr(w)
			r.out(w, tableTag)
		} else {
			r.out(w, tableCloseTag)
			r.cr(w)
		}
	case TableCell:
		openTag := tdTag
		closeTag := tdCloseTag
		if node.IsHeader {
			openTag = thTag
			closeTag = thCloseTag
		}
		if entering {
			align := cellAlignment(node.Align)
			if align != "" {
				attrs = append(attrs, fmt.Sprintf(`align="%s"`, align))
			}
			if node.Prev == nil {
				r.cr(w)
			}
			r.tag(w, openTag, attrs)
		} else {
			r.out(w, closeTag)
			r.cr(w)
		}
	case TableHead:
		if entering {
			r.cr(w)
			r.out(w, theadTag)
		} else {
			r.out(w, theadCloseTag)
			r.cr(w)
		}
	case TableBody:
		if entering {
			r.cr(w)
			r.out(w, tbodyTag)
			// XXX: this is to adhere to a rather silly test. Should fix test.
			if node.FirstChild == nil {
				r.cr(w)
			}
		} else {
			r.out(w, tbodyCloseTag)
			r.cr(w)
		}
	case TableRow:
		if entering {
			r.cr(w)
			r.out(w, trTag)
		} else {
			r.out(w, trCloseTag)
			r.cr(w)
		}
	default:
		panic("Unknown node type " + node.Type.String())
	}
	return GoToNext
}
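
// Illustrative sketch (not part of the upstream file): RenderNode can be
// driven directly from a Walk to render a subtree into a buffer, which is
// exactly what writeTOC does below for heading contents.
//
//	var buf bytes.Buffer
//	ast.Walk(func(node *Node, entering bool) WalkStatus {
//		return r.RenderNode(&buf, node, entering)
//	})
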
// RenderHeader writes HTML document preamble and TOC if requested.
func (r *HTMLRenderer) RenderHeader(w io.Writer, ast *Node) {
	r.writeDocumentHeader(w)
	if r.Flags&TOC != 0 {
		r.writeTOC(w, ast)
	}
}

// RenderFooter writes HTML document footer.
func (r *HTMLRenderer) RenderFooter(w io.Writer, ast *Node) {
	if r.Flags&CompletePage == 0 {
		return
	}
	io.WriteString(w, "\n</body>\n</html>\n")
}

func (r *HTMLRenderer) writeDocumentHeader(w io.Writer) {
	if r.Flags&CompletePage == 0 {
		return
	}
	ending := ""
	if r.Flags&UseXHTML != 0 {
		io.WriteString(w, "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" ")
		io.WriteString(w, "\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n")
		io.WriteString(w, "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n")
		ending = " /"
	} else {
		io.WriteString(w, "<!DOCTYPE html>\n")
		io.WriteString(w, "<html>\n")
	}
	io.WriteString(w, "<head>\n")
	io.WriteString(w, "  <title>")
	if r.Flags&Smartypants != 0 {
		r.sr.Process(w, []byte(r.Title))
	} else {
		escapeHTML(w, []byte(r.Title))
	}
	io.WriteString(w, "</title>\n")
	io.WriteString(w, "  <meta name=\"GENERATOR\" content=\"Blackfriday Markdown Processor v")
	io.WriteString(w, Version)
	io.WriteString(w, "\"")
	io.WriteString(w, ending)
	io.WriteString(w, ">\n")
	io.WriteString(w, "  <meta charset=\"utf-8\"")
	io.WriteString(w, ending)
	io.WriteString(w, ">\n")
	if r.CSS != "" {
		io.WriteString(w, "  <link rel=\"stylesheet\" type=\"text/css\" href=\"")
		escapeHTML(w, []byte(r.CSS))
		io.WriteString(w, "\"")
		io.WriteString(w, ending)
		io.WriteString(w, ">\n")
	}
	if r.Icon != "" {
		io.WriteString(w, "  <link rel=\"icon\" type=\"image/x-icon\" href=\"")
		escapeHTML(w, []byte(r.Icon))
		io.WriteString(w, "\"")
		io.WriteString(w, ending)
		io.WriteString(w, ">\n")
	}
	io.WriteString(w, "</head>\n")
	io.WriteString(w, "<body>\n\n")
}

func (r *HTMLRenderer) writeTOC(w io.Writer, ast *Node) {
	buf := bytes.Buffer{}

	inHeading := false
	tocLevel := 0
	headingCount := 0

	ast.Walk(func(node *Node, entering bool) WalkStatus {
		if node.Type == Heading && !node.HeadingData.IsTitleblock {
			inHeading = entering
			if entering {
				node.HeadingID = fmt.Sprintf("toc_%d", headingCount)
				if node.Level == tocLevel {
					buf.WriteString("</li>\n\n<li>")
				} else if node.Level < tocLevel {
					for node.Level < tocLevel {
						tocLevel--
						buf.WriteString("</li>\n</ul>")
					}
					buf.WriteString("</li>\n\n<li>")
				} else {
					for node.Level > tocLevel {
						tocLevel++
						buf.WriteString("\n<ul>\n<li>")
					}
				}

				fmt.Fprintf(&buf, `<a href="#toc_%d">`, headingCount)
				headingCount++
			} else {
				buf.WriteString("</a>")
			}
			return GoToNext
		}

		if inHeading {
			return r.RenderNode(&buf, node, entering)
		}

		return GoToNext
	})

	for ; tocLevel > 0; tocLevel-- {
		buf.WriteString("</li>\n</ul>")
	}

	if buf.Len() > 0 {
		io.WriteString(w, "<nav>\n")
		w.Write(buf.Bytes())
		io.WriteString(w, "\n\n</nav>\n")
	}
	r.lastOutputLen = buf.Len()
}

1228 endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/inline.go (generated, vendored, new file)
File diff suppressed because it is too large

950 endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/markdown.go (generated, vendored, new file)
@ -0,0 +1,950 @@
// Blackfriday Markdown Processor
// Available at http://github.com/russross/blackfriday
//
// Copyright © 2011 Russ Ross <russ@russross.com>.
// Distributed under the Simplified BSD License.
// See README.md for details.

package blackfriday

import (
	"bytes"
	"fmt"
	"io"
	"strings"
	"unicode/utf8"
)

//
// Markdown parsing and processing
//

// Version string of the package. Appears in the rendered document when
// CompletePage flag is on.
const Version = "2.0"

// Extensions is a bitwise or'ed collection of enabled Blackfriday's
// extensions.
type Extensions int

// These are the supported markdown parsing extensions.
// OR these values together to select multiple extensions.
const (
	NoExtensions           Extensions = 0
	NoIntraEmphasis        Extensions = 1 << iota // Ignore emphasis markers inside words
	Tables                                        // Render tables
	FencedCode                                    // Render fenced code blocks
	Autolink                                      // Detect embedded URLs that are not explicitly marked
	Strikethrough                                 // Strikethrough text using ~~test~~
	LaxHTMLBlocks                                 // Loosen up HTML block parsing rules
	SpaceHeadings                                 // Be strict about prefix heading rules
	HardLineBreak                                 // Translate newlines into line breaks
	TabSizeEight                                  // Expand tabs to eight spaces instead of four
	Footnotes                                     // Pandoc-style footnotes
	NoEmptyLineBeforeBlock                        // No need to insert an empty line to start a (code, quote, ordered list, unordered list) block
	HeadingIDs                                    // specify heading IDs with {#id}
	Titleblock                                    // Titleblock ala pandoc
	AutoHeadingIDs                                // Create the heading ID from the text
	BackslashLineBreak                            // Translate trailing backslashes into line breaks
	DefinitionLists                               // Render definition lists

	CommonHTMLFlags HTMLFlags = UseXHTML | Smartypants |
		SmartypantsFractions | SmartypantsDashes | SmartypantsLatexDashes

	CommonExtensions Extensions = NoIntraEmphasis | Tables | FencedCode |
		Autolink | Strikethrough | SpaceHeadings | HeadingIDs |
		BackslashLineBreak | DefinitionLists
)

// ListType contains bitwise or'ed flags for list and list item objects.
type ListType int

// These are the possible flag values for the ListItem renderer.
// Multiple flag values may be ORed together.
// These are mostly of interest if you are writing a new output format.
const (
	ListTypeOrdered ListType = 1 << iota
	ListTypeDefinition
	ListTypeTerm

	ListItemContainsBlock
	ListItemBeginningOfList // TODO: figure out if this is of any use now
	ListItemEndOfList
)

// CellAlignFlags holds a type of alignment in a table cell.
type CellAlignFlags int

// These are the possible flag values for the table cell renderer.
// Only a single one of these values will be used; they are not ORed together.
// These are mostly of interest if you are writing a new output format.
const (
	TableAlignmentLeft CellAlignFlags = 1 << iota
	TableAlignmentRight
	TableAlignmentCenter = (TableAlignmentLeft | TableAlignmentRight)
)

// The size of a tab stop.
const (
	TabSizeDefault = 4
	TabSizeDouble  = 8
)

// blockTags is a set of tags that are recognized as HTML block tags.
// Any of these can be included in markdown text without special escaping.
var blockTags = map[string]struct{}{
	"blockquote": {},
	"del":        {},
	"div":        {},
	"dl":         {},
	"fieldset":   {},
	"form":       {},
	"h1":         {},
	"h2":         {},
	"h3":         {},
	"h4":         {},
	"h5":         {},
	"h6":         {},
	"iframe":     {},
	"ins":        {},
	"math":       {},
	"noscript":   {},
	"ol":         {},
	"pre":        {},
	"p":          {},
	"script":     {},
	"style":      {},
	"table":      {},
	"ul":         {},

	// HTML5
	"address":    {},
	"article":    {},
	"aside":      {},
	"canvas":     {},
	"figcaption": {},
	"figure":     {},
	"footer":     {},
	"header":     {},
	"hgroup":     {},
	"main":       {},
	"nav":        {},
	"output":     {},
	"progress":   {},
	"section":    {},
	"video":      {},
}

// Renderer is the rendering interface. This is mostly of interest if you are
// implementing a new rendering format.
//
// Only an HTML implementation is provided in this repository, see the README
// for external implementations.
type Renderer interface {
	// RenderNode is the main rendering method. It will be called once for
	// every leaf node and twice for every non-leaf node (first with
	// entering=true, then with entering=false). The method should write its
	// rendition of the node to the supplied writer w.
	RenderNode(w io.Writer, node *Node, entering bool) WalkStatus

	// RenderHeader is a method that allows the renderer to produce some
	// content preceding the main body of the output document. The header is
	// understood in the broad sense here. For example, the default HTML
	// renderer will write not only the HTML document preamble, but also the
	// table of contents if it was requested.
	//
	// The method will be passed an entire document tree, in case a particular
	// implementation needs to inspect it to produce output.
	//
	// The output should be written to the supplied writer w. If your
	// implementation has no header to write, supply an empty implementation.
	RenderHeader(w io.Writer, ast *Node)

	// RenderFooter is a symmetric counterpart of RenderHeader.
	RenderFooter(w io.Writer, ast *Node)
}
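
// Illustrative sketch (not part of the upstream file): a minimal custom
// Renderer that counts nodes instead of producing output. Only RenderNode
// does real work; the header and footer hooks are intentionally empty.
//
//	type countingRenderer struct{ nodes int }
//
//	func (c *countingRenderer) RenderNode(w io.Writer, node *Node, entering bool) WalkStatus {
//		if entering {
//			c.nodes++
//		}
//		return GoToNext
//	}
//	func (c *countingRenderer) RenderHeader(w io.Writer, ast *Node) {}
//	func (c *countingRenderer) RenderFooter(w io.Writer, ast *Node) {}
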
// Callback functions for inline parsing. One such function is defined
// for each character that triggers a response when parsing inline data.
type inlineParser func(p *Markdown, data []byte, offset int) (int, *Node)

// Markdown is a type that holds extensions and the runtime state used by
// Parse, and the renderer. You cannot use it directly; construct it with New.
type Markdown struct {
	renderer          Renderer
	referenceOverride ReferenceOverrideFunc
	refs              map[string]*reference
	inlineCallback    [256]inlineParser
	extensions        Extensions
	nesting           int
	maxNesting        int
	insideLink        bool

	// Footnotes need to be ordered as well as available to quickly check for
	// presence. If a ref is also a footnote, it's stored both in refs and here
	// in notes. Slice is nil if footnotes not enabled.
	notes []*reference

	doc                  *Node
	tip                  *Node // = doc
	oldTip               *Node
	lastMatchedContainer *Node // = doc
	allClosed            bool
}

func (p *Markdown) getRef(refid string) (ref *reference, found bool) {
	if p.referenceOverride != nil {
		r, overridden := p.referenceOverride(refid)
		if overridden {
			if r == nil {
				return nil, false
			}
			return &reference{
				link:     []byte(r.Link),
				title:    []byte(r.Title),
				noteID:   0,
				hasBlock: false,
				text:     []byte(r.Text)}, true
		}
	}
	// refs are case insensitive
	ref, found = p.refs[strings.ToLower(refid)]
	return ref, found
}

func (p *Markdown) finalize(block *Node) {
	above := block.Parent
	block.open = false
	p.tip = above
}

func (p *Markdown) addChild(node NodeType, offset uint32) *Node {
	return p.addExistingChild(NewNode(node), offset)
}

func (p *Markdown) addExistingChild(node *Node, offset uint32) *Node {
	for !p.tip.canContain(node.Type) {
		p.finalize(p.tip)
	}
	p.tip.AppendChild(node)
	p.tip = node
	return node
}

func (p *Markdown) closeUnmatchedBlocks() {
	if !p.allClosed {
		for p.oldTip != p.lastMatchedContainer {
			parent := p.oldTip.Parent
			p.finalize(p.oldTip)
			p.oldTip = parent
		}
		p.allClosed = true
	}
}

//
//
// Public interface
//
//

// Reference represents the details of a link.
// See the documentation in Options for more details on use-case.
type Reference struct {
	// Link is usually the URL the reference points to.
	Link string
	// Title is the alternate text describing the link in more detail.
	Title string
	// Text is the optional text to override the ref with if the syntax used was
	// [refid][]
	Text string
}

// ReferenceOverrideFunc is expected to be called with a reference string and
// return either a valid Reference type that the reference string maps to or
// nil. If overridden is false, the default reference logic will be executed.
// See the documentation in Options for more details on use-case.
type ReferenceOverrideFunc func(reference string) (ref *Reference, overridden bool)

// New constructs a Markdown processor. You can use the same With* functions as
// for Run() to customize parser's behavior and the renderer.
func New(opts ...Option) *Markdown {
	var p Markdown
	for _, opt := range opts {
		opt(&p)
	}
	p.refs = make(map[string]*reference)
	p.maxNesting = 16
	p.insideLink = false
	docNode := NewNode(Document)
	p.doc = docNode
	p.tip = docNode
	p.oldTip = docNode
	p.lastMatchedContainer = docNode
	p.allClosed = true
	// register inline parsers
	p.inlineCallback[' '] = maybeLineBreak
	p.inlineCallback['*'] = emphasis
	p.inlineCallback['_'] = emphasis
	if p.extensions&Strikethrough != 0 {
		p.inlineCallback['~'] = emphasis
	}
	p.inlineCallback['`'] = codeSpan
	p.inlineCallback['\n'] = lineBreak
	p.inlineCallback['['] = link
	p.inlineCallback['<'] = leftAngle
	p.inlineCallback['\\'] = escape
	p.inlineCallback['&'] = entity
	p.inlineCallback['!'] = maybeImage
	p.inlineCallback['^'] = maybeInlineFootnote
	if p.extensions&Autolink != 0 {
		p.inlineCallback['h'] = maybeAutoLink
		p.inlineCallback['m'] = maybeAutoLink
		p.inlineCallback['f'] = maybeAutoLink
		p.inlineCallback['H'] = maybeAutoLink
		p.inlineCallback['M'] = maybeAutoLink
		p.inlineCallback['F'] = maybeAutoLink
	}
	if p.extensions&Footnotes != 0 {
		p.notes = make([]*reference, 0)
	}
	return &p
}

// Option customizes the Markdown processor's default behavior.
type Option func(*Markdown)

// WithRenderer allows you to override the default renderer.
func WithRenderer(r Renderer) Option {
	return func(p *Markdown) {
		p.renderer = r
	}
}

// WithExtensions allows you to pick some of the many extensions provided by
// Blackfriday. You can bitwise OR them.
func WithExtensions(e Extensions) Option {
	return func(p *Markdown) {
		p.extensions = e
	}
}
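
// Illustrative usage (not part of the upstream file): extensions are bit
// flags, so pick and mix them with bitwise OR, or start from CommonExtensions.
//
//	md := New(WithExtensions(Tables | FencedCode | Strikethrough))
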
// WithNoExtensions turns off all extensions and custom behavior.
func WithNoExtensions() Option {
	return func(p *Markdown) {
		p.extensions = NoExtensions
		p.renderer = NewHTMLRenderer(HTMLRendererParameters{
			Flags: HTMLFlagsNone,
		})
	}
}

// WithRefOverride sets an optional function callback that is called every
// time a reference is resolved.
//
// In Markdown, the link reference syntax can be made to resolve a link to
// a reference instead of an inline URL, in one of the following ways:
//
//  * [link text][refid]
//  * [refid][]
//
// Usually, the refid is defined at the bottom of the Markdown document. If
// this override function is provided, the refid is passed to the override
// function first, before consulting the defined refids at the bottom. If
// the override function indicates an override did not occur, the refids at
// the bottom will be used to fill in the link details.
func WithRefOverride(o ReferenceOverrideFunc) Option {
	return func(p *Markdown) {
		p.referenceOverride = o
	}
}

// Run is the main entry point to Blackfriday. It parses and renders a
// block of markdown-encoded text.
//
// The simplest invocation of Run takes one argument, input:
//
//	output := Run(input)
//
// This will parse the input with CommonExtensions enabled and render it with
// the default HTMLRenderer (with CommonHTMLFlags).
//
// Variadic arguments opts can customize the default behavior. Since Markdown
// type does not contain exported fields, you cannot use it directly. Instead,
// use the With* functions. For example, this will call the most basic
// functionality, with no extensions:
//
//	output := Run(input, WithNoExtensions())
//
// You can use any number of With* arguments, even contradicting ones. They
// will be applied in order of appearance and the latter will override the
// former:
//
//	output := Run(input, WithNoExtensions(), WithExtensions(exts),
//		WithRenderer(yourRenderer))
func Run(input []byte, opts ...Option) []byte {
	r := NewHTMLRenderer(HTMLRendererParameters{
		Flags: CommonHTMLFlags,
	})
	optList := []Option{WithRenderer(r), WithExtensions(CommonExtensions)}
	optList = append(optList, opts...)
	parser := New(optList...)
	ast := parser.Parse(input)
	var buf bytes.Buffer
	parser.renderer.RenderHeader(&buf, ast)
	ast.Walk(func(node *Node, entering bool) WalkStatus {
		return parser.renderer.RenderNode(&buf, node, entering)
	})
	parser.renderer.RenderFooter(&buf, ast)
	return buf.Bytes()
}
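
// Illustrative sketch (not part of the upstream file): combining Run with a
// configured HTML renderer, e.g. to emit a complete page with a table of
// contents. CompletePage and TOC are the HTMLFlags used elsewhere in this
// package.
//
//	r := NewHTMLRenderer(HTMLRendererParameters{
//		Title: "Notes",
//		Flags: CommonHTMLFlags | CompletePage | TOC,
//	})
//	html := Run([]byte("# Title\n\nBody"), WithRenderer(r))
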
// Parse is an entry point to the parsing part of Blackfriday. It takes an
// input markdown document and produces a syntax tree for its contents. This
// tree can then be rendered with a default or custom renderer, or
// analyzed/transformed by the caller to whatever non-standard needs they have.
// The return value is the root node of the syntax tree.
func (p *Markdown) Parse(input []byte) *Node {
	p.block(input)
	// Walk the tree and finish up any unfinished blocks
	for p.tip != nil {
		p.finalize(p.tip)
	}
	// Walk the tree again and process inline markdown in each block
	p.doc.Walk(func(node *Node, entering bool) WalkStatus {
		if node.Type == Paragraph || node.Type == Heading || node.Type == TableCell {
			p.inline(node, node.content)
			node.content = nil
		}
		return GoToNext
	})
	p.parseRefsToAST()
	return p.doc
}
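
// Illustrative sketch (not part of the upstream file): analyzing the tree
// returned by Parse without rendering it, e.g. collecting heading levels.
//
//	ast := New(WithExtensions(CommonExtensions)).Parse([]byte("# A\n\n## B"))
//	var levels []int
//	ast.Walk(func(node *Node, entering bool) WalkStatus {
//		if entering && node.Type == Heading {
//			levels = append(levels, node.Level)
//		}
//		return GoToNext
//	})
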
func (p *Markdown) parseRefsToAST() {
	if p.extensions&Footnotes == 0 || len(p.notes) == 0 {
		return
	}
	p.tip = p.doc
	block := p.addBlock(List, nil)
	block.IsFootnotesList = true
	block.ListFlags = ListTypeOrdered
	flags := ListItemBeginningOfList
	// Note: this loop is intentionally explicit, not range-form. This is
	// because the body of the loop will append nested footnotes to p.notes and
	// we need to process those late additions. Range form would only walk over
	// the fixed initial set.
	for i := 0; i < len(p.notes); i++ {
		ref := p.notes[i]
		p.addExistingChild(ref.footnote, 0)
		block := ref.footnote
		block.ListFlags = flags | ListTypeOrdered
		block.RefLink = ref.link
		if ref.hasBlock {
			flags |= ListItemContainsBlock
			p.block(ref.title)
		} else {
			p.inline(block, ref.title)
		}
		flags &^= ListItemBeginningOfList | ListItemContainsBlock
	}
	above := block.Parent
	finalizeList(block)
	p.tip = above
	block.Walk(func(node *Node, entering bool) WalkStatus {
		if node.Type == Paragraph || node.Type == Heading {
			p.inline(node, node.content)
			node.content = nil
		}
		return GoToNext
	})
}

//
// Link references
//
// This section implements support for references that (usually) appear
// as footnotes in a document, and can be referenced anywhere in the document.
// The basic format is:
//
//	[1]: http://www.google.com/ "Google"
//	[2]: http://www.github.com/ "Github"
//
// Anywhere in the document, the reference can be linked by referring to its
// label, i.e., 1 and 2 in this example, as in:
//
//	This library is hosted on [Github][2], a git hosting site.
//
// Actual footnotes as specified in Pandoc and supported by some other Markdown
// libraries such as php-markdown are also taken care of. They look like this:
//
//	This sentence needs a bit of further explanation.[^note]
//
//	[^note]: This is the explanation.
//
// Footnotes should be placed at the end of the document in an ordered list.
// Finally, there are inline footnotes such as:
//
//	Inline footnotes^[Also supported.] provide a quick inline explanation,
//	but are rendered at the bottom of the document.
//

// reference holds all information necessary for reference-style links or
// footnotes.
//
// Consider this markdown with reference-style links:
//
//	[link][ref]
//
//	[ref]: /url/ "tooltip title"
//
// It will be ultimately converted to this HTML:
//
//	<p><a href=\"/url/\" title=\"title\">link</a></p>
//
// And a reference structure will be populated as follows:
//
//	p.refs["ref"] = &reference{
//		link: "/url/",
//		title: "tooltip title",
//	}
//
// Alternatively, reference can contain information about a footnote. Consider
// this markdown:
//
//	Text needing a footnote.[^a]
//
//	[^a]: This is the note
//
// A reference structure will be populated as follows:
//
//	p.refs["a"] = &reference{
//		link: "a",
//		title: "This is the note",
//		noteID: <some positive int>,
//	}
//
// TODO: As you can see, it begs for splitting into two dedicated structures
// for refs and for footnotes.
type reference struct {
	link     []byte
	title    []byte
	noteID   int // 0 if not a footnote ref
	hasBlock bool
	footnote *Node // a link to the Item node within a list of footnotes

	text []byte // only gets populated by refOverride feature with Reference.Text
}

func (r *reference) String() string {
	return fmt.Sprintf("{link: %q, title: %q, text: %q, noteID: %d, hasBlock: %v}",
		r.link, r.title, r.text, r.noteID, r.hasBlock)
}

// Check whether or not data starts with a reference link.
// If so, it is parsed and stored in the list of references
// (in the render struct).
// Returns the number of bytes to skip to move past it,
// or zero if the first line is not a reference.
func isReference(p *Markdown, data []byte, tabSize int) int {
	// up to 3 optional leading spaces
	if len(data) < 4 {
		return 0
	}
	i := 0
	for i < 3 && data[i] == ' ' {
		i++
	}

	noteID := 0

	// id part: anything but a newline between brackets
	if data[i] != '[' {
		return 0
	}
	i++
	if p.extensions&Footnotes != 0 {
		if i < len(data) && data[i] == '^' {
			// we can set it to anything here because the proper noteIds will
			// be assigned later during the second pass. It just has to be != 0
			noteID = 1
			i++
		}
	}
	idOffset := i
	for i < len(data) && data[i] != '\n' && data[i] != '\r' && data[i] != ']' {
		i++
	}
	if i >= len(data) || data[i] != ']' {
		return 0
	}
	idEnd := i
	// footnotes can have an empty ID, like this: [^], but a reference cannot
	// be empty like this: []. Break early if it's not a footnote and there's no ID
	if noteID == 0 && idOffset == idEnd {
		return 0
	}
	// spacer: colon (space | tab)* newline? (space | tab)*
	i++
	if i >= len(data) || data[i] != ':' {
		return 0
	}
	i++
	for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
		i++
	}
	if i < len(data) && (data[i] == '\n' || data[i] == '\r') {
		i++
		if i < len(data) && data[i] == '\n' && data[i-1] == '\r' {
			i++
		}
	}
	for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
		i++
	}
	if i >= len(data) {
		return 0
	}

	var (
		linkOffset, linkEnd   int
		titleOffset, titleEnd int
		lineEnd               int
		raw                   []byte
		hasBlock              bool
	)

	if p.extensions&Footnotes != 0 && noteID != 0 {
		linkOffset, linkEnd, raw, hasBlock = scanFootnote(p, data, i, tabSize)
		lineEnd = linkEnd
	} else {
		linkOffset, linkEnd, titleOffset, titleEnd, lineEnd = scanLinkRef(p, data, i)
	}
	if lineEnd == 0 {
		return 0
	}

	// a valid ref has been found

	ref := &reference{
		noteID:   noteID,
		hasBlock: hasBlock,
	}

	if noteID > 0 {
		// reusing the link field for the id since footnotes don't have links
		ref.link = data[idOffset:idEnd]
		// if footnote, it's not really a title, it's the contained text
		ref.title = raw
	} else {
		ref.link = data[linkOffset:linkEnd]
		ref.title = data[titleOffset:titleEnd]
	}

	// id matches are case-insensitive
	id := string(bytes.ToLower(data[idOffset:idEnd]))

	p.refs[id] = ref

	return lineEnd
}

func scanLinkRef(p *Markdown, data []byte, i int) (linkOffset, linkEnd, titleOffset, titleEnd, lineEnd int) {
	// link: whitespace-free sequence, optionally between angle brackets
	if data[i] == '<' {
		i++
	}
	linkOffset = i
	for i < len(data) && data[i] != ' ' && data[i] != '\t' && data[i] != '\n' && data[i] != '\r' {
		i++
	}
	linkEnd = i
	if data[linkOffset] == '<' && data[linkEnd-1] == '>' {
		linkOffset++
		linkEnd--
	}

	// optional spacer: (space | tab)* (newline | '\'' | '"' | '(' )
	for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
		i++
	}
	if i < len(data) && data[i] != '\n' && data[i] != '\r' && data[i] != '\'' && data[i] != '"' && data[i] != '(' {
		return
	}

	// compute end-of-line
	if i >= len(data) || data[i] == '\r' || data[i] == '\n' {
		lineEnd = i
	}
	if i+1 < len(data) && data[i] == '\r' && data[i+1] == '\n' {
		lineEnd++
	}

	// optional (space|tab)* spacer after a newline
	if lineEnd > 0 {
		i = lineEnd + 1
		for i < len(data) && (data[i] == ' ' || data[i] == '\t') {
			i++
		}
	}

	// optional title: any non-newline sequence enclosed in '"() alone on its line
	if i+1 < len(data) && (data[i] == '\'' || data[i] == '"' || data[i] == '(') {
		i++
		titleOffset = i

		// look for EOL
		for i < len(data) && data[i] != '\n' && data[i] != '\r' {
			i++
		}
		if i+1 < len(data) && data[i] == '\n' && data[i+1] == '\r' {
			titleEnd = i + 1
		} else {
			titleEnd = i
		}

		// step back
		i--
		for i > titleOffset && (data[i] == ' ' || data[i] == '\t') {
			i--
		}
		if i > titleOffset && (data[i] == '\'' || data[i] == '"' || data[i] == ')') {
			lineEnd = titleEnd
			titleEnd = i
		}
	}

	return
}

// The first bit of this logic is the same as Parser.listItem, but the rest
// is much simpler. This function simply finds the entire block and shifts it
// over by one tab if it is indeed a block (just returns the line if it's not).
// blockEnd is the end of the section in the input buffer, and contents is the
// extracted text that was shifted over one tab. It will need to be rendered at
// the end of the document.
func scanFootnote(p *Markdown, data []byte, i, indentSize int) (blockStart, blockEnd int, contents []byte, hasBlock bool) {
	if i == 0 || len(data) == 0 {
		return
	}

	// skip leading whitespace on first line
	for i < len(data) && data[i] == ' ' {
		i++
	}

	blockStart = i

	// find the end of the line
	blockEnd = i
	for i < len(data) && data[i-1] != '\n' {
		i++
	}

	// get working buffer
	var raw bytes.Buffer

	// put the first line into the working buffer
	raw.Write(data[blockEnd:i])
	blockEnd = i

	// process the following lines
	containsBlankLine := false

gatherLines:
	for blockEnd < len(data) {
		i++

		// find the end of this line
		for i < len(data) && data[i-1] != '\n' {
			i++
		}

		// if it is an empty line, guess that it is part of this item
		// and move on to the next line
		if p.isEmpty(data[blockEnd:i]) > 0 {
			containsBlankLine = true
			blockEnd = i
			continue
		}

		n := 0
		if n = isIndented(data[blockEnd:i], indentSize); n == 0 {
			// this is the end of the block.
			// we don't want to include this last line in the index.
			break gatherLines
		}

		// if there were blank lines before this one, insert a new one now
		if containsBlankLine {
			raw.WriteByte('\n')
			containsBlankLine = false
		}

		// get rid of that first tab, write to buffer
		raw.Write(data[blockEnd+n : i])
		hasBlock = true

		blockEnd = i
	}

	if data[blockEnd-1] != '\n' {
		raw.WriteByte('\n')
	}

	contents = raw.Bytes()

	return
}

//
//
// Miscellaneous helper functions
//
//

// Test if a character is a punctuation symbol.
// Taken from a private function in regexp in the stdlib.
func ispunct(c byte) bool {
	for _, r := range []byte("!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~") {
		if c == r {
			return true
		}
	}
	return false
}

// Test if a character is a whitespace character.
func isspace(c byte) bool {
	return ishorizontalspace(c) || isverticalspace(c)
}

// Test if a character is a horizontal whitespace character.
func ishorizontalspace(c byte) bool {
	return c == ' ' || c == '\t'
}

// Test if a character is a vertical whitespace character.
func isverticalspace(c byte) bool {
	return c == '\n' || c == '\r' || c == '\f' || c == '\v'
}

// Test if a character is a letter.
func isletter(c byte) bool {
	return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
}

// Test if a character is a letter or a digit.
// TODO: check when this is looking for ASCII alnum and when it should use unicode
func isalnum(c byte) bool {
	return (c >= '0' && c <= '9') || isletter(c)
}

// Replace tab characters with spaces, aligning to the next TAB_SIZE column.
// always ends output with a newline
func expandTabs(out *bytes.Buffer, line []byte, tabSize int) {
	// first, check for common cases: no tabs, or only tabs at beginning of line
	i, prefix := 0, 0
	slowcase := false
	for i = 0; i < len(line); i++ {
		if line[i] == '\t' {
			if prefix == i {
				prefix++
			} else {
				slowcase = true
				break
			}
		}
	}

	// no need to decode runes if all tabs are at the beginning of the line
	if !slowcase {
		for i = 0; i < prefix*tabSize; i++ {
			out.WriteByte(' ')
		}
		out.Write(line[prefix:])
		return
	}

	// the slow case: we need to count runes to figure out how
	// many spaces to insert for each tab
	column := 0
	i = 0
	for i < len(line) {
		start := i
		for i < len(line) && line[i] != '\t' {
			_, size := utf8.DecodeRune(line[i:])
			i += size
			column++
		}

		if i > start {
			out.Write(line[start:i])
		}

		if i >= len(line) {
			break
		}

		for {
			out.WriteByte(' ')
			column++
			if column%tabSize == 0 {
				break
			}
		}

		i++
	}
}
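
// Illustrative sketch (not part of the upstream file): with tabSize 4, a tab
// always advances to the next multiple-of-4 column, so the number of spaces
// inserted depends on the current column.
//
//	var out bytes.Buffer
//	expandTabs(&out, []byte("ab\tc"), 4)
//	// out.String() == "ab  c" (two spaces: column 2 -> 4)
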
// Find if a line counts as indented or not.
// Returns number of characters the indent is (0 = not indented).
func isIndented(data []byte, indentSize int) int {
	if len(data) == 0 {
		return 0
	}
	if data[0] == '\t' {
		return 1
	}
	if len(data) < indentSize {
		return 0
	}
	for i := 0; i < indentSize; i++ {
		if data[i] != ' ' {
			return 0
		}
	}
	return indentSize
}
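
// Illustrative examples (not part of the upstream file):
//
//	isIndented([]byte("\tcode"), 4)   // 1 (a tab counts as one character of indent)
//	isIndented([]byte("    code"), 4) // 4 (indentSize spaces)
//	isIndented([]byte("  code"), 4)   // 0 (not enough indentation)
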
// Create a url-safe slug for fragments
func slugify(in []byte) []byte {
	if len(in) == 0 {
		return in
	}
	out := make([]byte, 0, len(in))
	sym := false

	for _, ch := range in {
		if isalnum(ch) {
			sym = false
			out = append(out, ch)
		} else if sym {
			continue
		} else {
			out = append(out, '-')
			sym = true
		}
	}
	var a, b int
	var ch byte
	for a, ch = range out {
		if ch != '-' {
			break
		}
	}
	for b = len(out) - 1; b > 0; b-- {
		if out[b] != '-' {
			break
		}
	}
	return out[a : b+1]
}
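
// Illustrative examples (not part of the upstream file): runs of
// non-alphanumeric characters collapse to a single '-', and leading and
// trailing dashes are trimmed.
//
//	slugify([]byte("Hello, World!")) // "Hello-World"
//	slugify([]byte("  a b  "))       // "a-b"
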
360 endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/node.go (generated, vendored, new file)
@ -0,0 +1,360 @@
package blackfriday

import (
	"bytes"
	"fmt"
)

// NodeType specifies a type of a single node of a syntax tree. Usually one
// node (and its type) corresponds to a single markdown feature, e.g. emphasis
// or code block.
type NodeType int

// Constants for identifying different types of nodes. See NodeType.
const (
	Document NodeType = iota
	BlockQuote
	List
	Item
	Paragraph
	Heading
	HorizontalRule
	Emph
	Strong
	Del
	Link
	Image
	Text
	HTMLBlock
	CodeBlock
	Softbreak
	Hardbreak
	Code
	HTMLSpan
	Table
	TableCell
	TableHead
	TableBody
	TableRow
)

var nodeTypeNames = []string{
	Document:       "Document",
	BlockQuote:     "BlockQuote",
	List:           "List",
	Item:           "Item",
	Paragraph:      "Paragraph",
	Heading:        "Heading",
	HorizontalRule: "HorizontalRule",
	Emph:           "Emph",
	Strong:         "Strong",
	Del:            "Del",
	Link:           "Link",
	Image:          "Image",
	Text:           "Text",
	HTMLBlock:      "HTMLBlock",
	CodeBlock:      "CodeBlock",
	Softbreak:      "Softbreak",
	Hardbreak:      "Hardbreak",
	Code:           "Code",
	HTMLSpan:       "HTMLSpan",
	Table:          "Table",
	TableCell:      "TableCell",
	TableHead:      "TableHead",
	TableBody:      "TableBody",
	TableRow:       "TableRow",
}

func (t NodeType) String() string {
	return nodeTypeNames[t]
}

// ListData contains fields relevant to a List and Item node type.
type ListData struct {
	ListFlags       ListType
	Tight           bool   // Skip <p>s around list item data if true
	BulletChar      byte   // '*', '+' or '-' in bullet lists
	Delimiter       byte   // '.' or ')' after the number in ordered lists
	RefLink         []byte // If not nil, turns this list item into a footnote item and triggers different rendering
	IsFootnotesList bool   // This is a list of footnotes
}

// LinkData contains fields relevant to a Link node type.
type LinkData struct {
	Destination []byte // Destination is what goes into a href
	Title       []byte // Title is the tooltip thing that goes in a title attribute
	NoteID      int    // NoteID contains a serial number of a footnote, zero if it's not a footnote
	Footnote    *Node  // If it's a footnote, this is a direct link to the footnote Node. Otherwise nil.
}

// CodeBlockData contains fields relevant to a CodeBlock node type.
type CodeBlockData struct {
	IsFenced    bool   // Specifies whether it's a fenced code block or an indented one
	Info        []byte // This holds the info string
	FenceChar   byte
	FenceLength int
	FenceOffset int
}

// TableCellData contains fields relevant to a TableCell node type.
type TableCellData struct {
	IsHeader bool           // This tells if it's under the header row
	Align    CellAlignFlags // This holds the value for align attribute
}

// HeadingData contains fields relevant to a Heading node type.
type HeadingData struct {
	Level        int    // This holds the heading level number
	HeadingID    string // This might hold heading ID, if present
	IsTitleblock bool   // Specifies whether it's a title block
}

// Node is a single element in the abstract syntax tree of the parsed document.
// It holds connections to the structurally neighboring nodes and, for certain
// types of nodes, additional information that might be needed when rendering.
type Node struct {
	Type       NodeType // Determines the type of the node
	Parent     *Node    // Points to the parent
	FirstChild *Node    // Points to the first child, if any
	LastChild  *Node    // Points to the last child, if any
	Prev       *Node    // Previous sibling; nil if it's the first child
	Next       *Node    // Next sibling; nil if it's the last child

	Literal []byte // Text contents of the leaf nodes

	HeadingData   // Populated if Type is Heading
	ListData      // Populated if Type is List
	CodeBlockData // Populated if Type is CodeBlock
	LinkData      // Populated if Type is Link
	TableCellData // Populated if Type is TableCell

	content []byte // Markdown content of the block nodes
	open    bool   // Specifies an open block node that has not been finished to process yet
}

// NewNode allocates a node of a specified type.
func NewNode(typ NodeType) *Node {
	return &Node{
		Type: typ,
		open: true,
	}
}

func (n *Node) String() string {
	ellipsis := ""
	snippet := n.Literal
	if len(snippet) > 16 {
		snippet = snippet[:16]
		ellipsis = "..."
	}
	return fmt.Sprintf("%s: '%s%s'", n.Type, snippet, ellipsis)
}

// Unlink removes node 'n' from the tree.
// It panics if the node is nil.
func (n *Node) Unlink() {
	if n.Prev != nil {
		n.Prev.Next = n.Next
	} else if n.Parent != nil {
		n.Parent.FirstChild = n.Next
	}
	if n.Next != nil {
		n.Next.Prev = n.Prev
	} else if n.Parent != nil {
		n.Parent.LastChild = n.Prev
	}
	n.Parent = nil
	n.Next = nil
	n.Prev = nil
}

// AppendChild adds a node 'child' as a child of 'n'.
// It panics if either node is nil.
func (n *Node) AppendChild(child *Node) {
	child.Unlink()
	child.Parent = n
	if n.LastChild != nil {
		n.LastChild.Next = child
		child.Prev = n.LastChild
		n.LastChild = child
|
||||
} else {
|
||||
n.FirstChild = child
|
||||
n.LastChild = child
|
||||
}
|
||||
}
|
||||
|
||||
// InsertBefore inserts 'sibling' immediately before 'n'.
|
||||
// It panics if either node is nil.
|
||||
func (n *Node) InsertBefore(sibling *Node) {
|
||||
sibling.Unlink()
|
||||
sibling.Prev = n.Prev
|
||||
if sibling.Prev != nil {
|
||||
sibling.Prev.Next = sibling
|
||||
}
|
||||
sibling.Next = n
|
||||
n.Prev = sibling
|
||||
sibling.Parent = n.Parent
|
||||
if sibling.Prev == nil {
|
||||
sibling.Parent.FirstChild = sibling
|
||||
}
|
||||
}
|
||||
|
||||
// IsContainer returns true if 'n' can contain children.
|
||||
func (n *Node) IsContainer() bool {
|
||||
switch n.Type {
|
||||
case Document:
|
||||
fallthrough
|
||||
case BlockQuote:
|
||||
fallthrough
|
||||
case List:
|
||||
fallthrough
|
||||
case Item:
|
||||
fallthrough
|
||||
case Paragraph:
|
||||
fallthrough
|
||||
case Heading:
|
||||
fallthrough
|
||||
case Emph:
|
||||
fallthrough
|
||||
case Strong:
|
||||
fallthrough
|
||||
case Del:
|
||||
fallthrough
|
||||
case Link:
|
||||
fallthrough
|
||||
case Image:
|
||||
fallthrough
|
||||
case Table:
|
||||
fallthrough
|
||||
case TableHead:
|
||||
fallthrough
|
||||
case TableBody:
|
||||
fallthrough
|
||||
case TableRow:
|
||||
fallthrough
|
||||
case TableCell:
|
||||
return true
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// IsLeaf returns true if 'n' is a leaf node.
|
||||
func (n *Node) IsLeaf() bool {
|
||||
return !n.IsContainer()
|
||||
}
|
||||
|
||||
func (n *Node) canContain(t NodeType) bool {
|
||||
if n.Type == List {
|
||||
return t == Item
|
||||
}
|
||||
if n.Type == Document || n.Type == BlockQuote || n.Type == Item {
|
||||
return t != Item
|
||||
}
|
||||
if n.Type == Table {
|
||||
return t == TableHead || t == TableBody
|
||||
}
|
||||
if n.Type == TableHead || n.Type == TableBody {
|
||||
return t == TableRow
|
||||
}
|
||||
if n.Type == TableRow {
|
||||
return t == TableCell
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// WalkStatus allows NodeVisitor to have some control over the tree traversal.
|
||||
// It is returned from NodeVisitor and different values allow Node.Walk to
|
||||
// decide which node to go to next.
|
||||
type WalkStatus int
|
||||
|
||||
const (
|
||||
// GoToNext is the default traversal of every node.
|
||||
GoToNext WalkStatus = iota
|
||||
// SkipChildren tells walker to skip all children of current node.
|
||||
SkipChildren
|
||||
// Terminate tells walker to terminate the traversal.
|
||||
Terminate
|
||||
)
|
||||
|
||||
// NodeVisitor is a callback to be called when traversing the syntax tree.
|
||||
// Called twice for every node: once with entering=true when the branch is
|
||||
// first visited, then with entering=false after all the children are done.
|
||||
type NodeVisitor func(node *Node, entering bool) WalkStatus
|
||||
|
||||
// Walk is a convenience method that instantiates a walker and starts a
|
||||
// traversal of subtree rooted at n.
|
||||
func (n *Node) Walk(visitor NodeVisitor) {
|
||||
w := newNodeWalker(n)
|
||||
for w.current != nil {
|
||||
status := visitor(w.current, w.entering)
|
||||
switch status {
|
||||
case GoToNext:
|
||||
w.next()
|
||||
case SkipChildren:
|
||||
w.entering = false
|
||||
w.next()
|
||||
case Terminate:
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
type nodeWalker struct {
|
||||
current *Node
|
||||
root *Node
|
||||
entering bool
|
||||
}
|
||||
|
||||
func newNodeWalker(root *Node) *nodeWalker {
|
||||
return &nodeWalker{
|
||||
current: root,
|
||||
root: root,
|
||||
entering: true,
|
||||
}
|
||||
}
|
||||
|
||||
func (nw *nodeWalker) next() {
|
||||
if (!nw.current.IsContainer() || !nw.entering) && nw.current == nw.root {
|
||||
nw.current = nil
|
||||
return
|
||||
}
|
||||
if nw.entering && nw.current.IsContainer() {
|
||||
if nw.current.FirstChild != nil {
|
||||
nw.current = nw.current.FirstChild
|
||||
nw.entering = true
|
||||
} else {
|
||||
nw.entering = false
|
||||
}
|
||||
} else if nw.current.Next == nil {
|
||||
nw.current = nw.current.Parent
|
||||
nw.entering = false
|
||||
} else {
|
||||
nw.current = nw.current.Next
|
||||
nw.entering = true
|
||||
}
|
||||
}
|
||||
|
||||
func dump(ast *Node) {
|
||||
fmt.Println(dumpString(ast))
|
||||
}
|
||||
|
||||
func dumpR(ast *Node, depth int) string {
|
||||
if ast == nil {
|
||||
return ""
|
||||
}
|
||||
indent := bytes.Repeat([]byte("\t"), depth)
|
||||
content := ast.Literal
|
||||
if content == nil {
|
||||
content = ast.content
|
||||
}
|
||||
result := fmt.Sprintf("%s%s(%q)\n", indent, ast.Type, content)
|
||||
for n := ast.FirstChild; n != nil; n = n.Next {
|
||||
result += dumpR(n, depth+1)
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func dumpString(ast *Node) string {
|
||||
return dumpR(ast, 0)
|
||||
}
|
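Everything needed to build and traverse an AST by hand is in this file: NewNode allocates nodes, AppendChild links them, and Walk drives a NodeVisitor over the result. A minimal sketch using only the exported names shown above (the import path assumes the vendored module); note that, as the walker logic shows, container nodes are visited on entry and exit while leaf nodes are visited only once, with entering=true:

```go
package main

import (
	"fmt"

	"github.com/russross/blackfriday/v2"
)

func main() {
	// Hand-build Document -> Paragraph -> Text("hello").
	doc := blackfriday.NewNode(blackfriday.Document)
	para := blackfriday.NewNode(blackfriday.Paragraph)
	text := blackfriday.NewNode(blackfriday.Text)
	text.Literal = []byte("hello")
	para.AppendChild(text)
	doc.AppendChild(para)

	// Prints Document and Paragraph twice (entering/leaving), Text once.
	doc.Walk(func(n *blackfriday.Node, entering bool) blackfriday.WalkStatus {
		fmt.Printf("%s entering=%v\n", n.Type, entering)
		return blackfriday.GoToNext
	})
}
```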
457
endgamefiles/sourcecode/gobalance/vendor/github.com/russross/blackfriday/v2/smartypants.go
generated
vendored
Normal file
@ -0,0 +1,457 @@
//
// Blackfriday Markdown Processor
// Available at http://github.com/russross/blackfriday
//
// Copyright © 2011 Russ Ross <russ@russross.com>.
// Distributed under the Simplified BSD License.
// See README.md for details.
//

//
//
// SmartyPants rendering
//
//

package blackfriday

import (
	"bytes"
	"io"
)

// SPRenderer is a struct containing state of a Smartypants renderer.
type SPRenderer struct {
	inSingleQuote bool
	inDoubleQuote bool
	callbacks     [256]smartCallback
}

func wordBoundary(c byte) bool {
	return c == 0 || isspace(c) || ispunct(c)
}

func tolower(c byte) byte {
	if c >= 'A' && c <= 'Z' {
		return c - 'A' + 'a'
	}
	return c
}

func isdigit(c byte) bool {
	return c >= '0' && c <= '9'
}

func smartQuoteHelper(out *bytes.Buffer, previousChar byte, nextChar byte, quote byte, isOpen *bool, addNBSP bool) bool {
	// edge of the buffer is likely to be a tag that we don't get to see,
	// so we treat it like text sometimes

	// enumerate all sixteen possibilities for (previousChar, nextChar)
	// each can be one of {0, space, punct, other}
	switch {
	case previousChar == 0 && nextChar == 0:
		// context is not any help here, so toggle
		*isOpen = !*isOpen
	case isspace(previousChar) && nextChar == 0:
		// [ "] might be [ "<code>foo...]
		*isOpen = true
	case ispunct(previousChar) && nextChar == 0:
		// [!"] hmm... could be [Run!"] or [("<code>...]
		*isOpen = false
	case /* isnormal(previousChar) && */ nextChar == 0:
		// [a"] is probably a close
		*isOpen = false
	case previousChar == 0 && isspace(nextChar):
		// [" ] might be [...foo</code>" ]
		*isOpen = false
	case isspace(previousChar) && isspace(nextChar):
		// [ " ] context is not any help here, so toggle
		*isOpen = !*isOpen
	case ispunct(previousChar) && isspace(nextChar):
		// [!" ] is probably a close
		*isOpen = false
	case /* isnormal(previousChar) && */ isspace(nextChar):
		// [a" ] this is one of the easy cases
		*isOpen = false
	case previousChar == 0 && ispunct(nextChar):
		// ["!] hmm... could be ["$1.95] or [</code>"!...]
		*isOpen = false
	case isspace(previousChar) && ispunct(nextChar):
		// [ "!] looks more like [ "$1.95]
		*isOpen = true
	case ispunct(previousChar) && ispunct(nextChar):
		// [!"!] context is not any help here, so toggle
		*isOpen = !*isOpen
	case /* isnormal(previousChar) && */ ispunct(nextChar):
		// [a"!] is probably a close
		*isOpen = false
	case previousChar == 0 /* && isnormal(nextChar) */ :
		// ["a] is probably an open
		*isOpen = true
	case isspace(previousChar) /* && isnormal(nextChar) */ :
		// [ "a] this is one of the easy cases
		*isOpen = true
	case ispunct(previousChar) /* && isnormal(nextChar) */ :
		// [!"a] is probably an open
		*isOpen = true
	default:
		// [a'b] maybe a contraction?
		*isOpen = false
	}

	// Note that with the limited lookahead, this non-breaking
	// space will also be appended to single double quotes.
	if addNBSP && !*isOpen {
		out.WriteString("&nbsp;")
	}

	out.WriteByte('&')
	if *isOpen {
		out.WriteByte('l')
	} else {
		out.WriteByte('r')
	}
	out.WriteByte(quote)
	out.WriteString("quo;")

	if addNBSP && *isOpen {
		out.WriteString("&nbsp;")
	}

	return true
}

func (r *SPRenderer) smartSingleQuote(out *bytes.Buffer, previousChar byte, text []byte) int {
	if len(text) >= 2 {
		t1 := tolower(text[1])

		if t1 == '\'' {
			nextChar := byte(0)
			if len(text) >= 3 {
				nextChar = text[2]
			}
			if smartQuoteHelper(out, previousChar, nextChar, 'd', &r.inDoubleQuote, false) {
				return 1
			}
		}

		if (t1 == 's' || t1 == 't' || t1 == 'm' || t1 == 'd') && (len(text) < 3 || wordBoundary(text[2])) {
			out.WriteString("&rsquo;")
			return 0
		}

		if len(text) >= 3 {
			t2 := tolower(text[2])

			if ((t1 == 'r' && t2 == 'e') || (t1 == 'l' && t2 == 'l') || (t1 == 'v' && t2 == 'e')) &&
				(len(text) < 4 || wordBoundary(text[3])) {
				out.WriteString("&rsquo;")
				return 0
			}
		}
	}

	nextChar := byte(0)
	if len(text) > 1 {
		nextChar = text[1]
	}
	if smartQuoteHelper(out, previousChar, nextChar, 's', &r.inSingleQuote, false) {
		return 0
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartParens(out *bytes.Buffer, previousChar byte, text []byte) int {
	if len(text) >= 3 {
		t1 := tolower(text[1])
		t2 := tolower(text[2])

		if t1 == 'c' && t2 == ')' {
			out.WriteString("&copy;")
			return 2
		}

		if t1 == 'r' && t2 == ')' {
			out.WriteString("&reg;")
			return 2
		}

		if len(text) >= 4 && t1 == 't' && t2 == 'm' && text[3] == ')' {
			out.WriteString("&trade;")
			return 3
		}
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartDash(out *bytes.Buffer, previousChar byte, text []byte) int {
	if len(text) >= 2 {
		if text[1] == '-' {
			out.WriteString("&mdash;")
			return 1
		}

		if wordBoundary(previousChar) && wordBoundary(text[1]) {
			out.WriteString("&ndash;")
			return 0
		}
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartDashLatex(out *bytes.Buffer, previousChar byte, text []byte) int {
	if len(text) >= 3 && text[1] == '-' && text[2] == '-' {
		out.WriteString("&mdash;")
		return 2
	}
	if len(text) >= 2 && text[1] == '-' {
		out.WriteString("&ndash;")
		return 1
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartAmpVariant(out *bytes.Buffer, previousChar byte, text []byte, quote byte, addNBSP bool) int {
	if bytes.HasPrefix(text, []byte("&quot;")) {
		nextChar := byte(0)
		if len(text) >= 7 {
			nextChar = text[6]
		}
		if smartQuoteHelper(out, previousChar, nextChar, quote, &r.inDoubleQuote, addNBSP) {
			return 5
		}
	}

	if bytes.HasPrefix(text, []byte("&#0;")) {
		return 3
	}

	out.WriteByte('&')
	return 0
}

func (r *SPRenderer) smartAmp(angledQuotes, addNBSP bool) func(*bytes.Buffer, byte, []byte) int {
	var quote byte = 'd'
	if angledQuotes {
		quote = 'a'
	}

	return func(out *bytes.Buffer, previousChar byte, text []byte) int {
		return r.smartAmpVariant(out, previousChar, text, quote, addNBSP)
	}
}

func (r *SPRenderer) smartPeriod(out *bytes.Buffer, previousChar byte, text []byte) int {
	if len(text) >= 3 && text[1] == '.' && text[2] == '.' {
		out.WriteString("&hellip;")
		return 2
	}

	if len(text) >= 5 && text[1] == ' ' && text[2] == '.' && text[3] == ' ' && text[4] == '.' {
		out.WriteString("&hellip;")
		return 4
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartBacktick(out *bytes.Buffer, previousChar byte, text []byte) int {
	if len(text) >= 2 && text[1] == '`' {
		nextChar := byte(0)
		if len(text) >= 3 {
			nextChar = text[2]
		}
		if smartQuoteHelper(out, previousChar, nextChar, 'd', &r.inDoubleQuote, false) {
			return 1
		}
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartNumberGeneric(out *bytes.Buffer, previousChar byte, text []byte) int {
	if wordBoundary(previousChar) && previousChar != '/' && len(text) >= 3 {
		// is it of the form digits/digits(word boundary)?, i.e., \d+/\d+\b
		// note: check for regular slash (/) or fraction slash (⁄, 0x2044, or 0xe2 81 84 in utf-8)
		// and avoid changing dates like 1/23/2005 into fractions.
		numEnd := 0
		for len(text) > numEnd && isdigit(text[numEnd]) {
			numEnd++
		}
		if numEnd == 0 {
			out.WriteByte(text[0])
			return 0
		}
		denStart := numEnd + 1
		if len(text) > numEnd+3 && text[numEnd] == 0xe2 && text[numEnd+1] == 0x81 && text[numEnd+2] == 0x84 {
			denStart = numEnd + 3
		} else if len(text) < numEnd+2 || text[numEnd] != '/' {
			out.WriteByte(text[0])
			return 0
		}
		denEnd := denStart
		for len(text) > denEnd && isdigit(text[denEnd]) {
			denEnd++
		}
		if denEnd == denStart {
			out.WriteByte(text[0])
			return 0
		}
		if len(text) == denEnd || wordBoundary(text[denEnd]) && text[denEnd] != '/' {
			out.WriteString("<sup>")
			out.Write(text[:numEnd])
			out.WriteString("</sup>&frasl;<sub>")
			out.Write(text[denStart:denEnd])
			out.WriteString("</sub>")
			return denEnd - 1
		}
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartNumber(out *bytes.Buffer, previousChar byte, text []byte) int {
	if wordBoundary(previousChar) && previousChar != '/' && len(text) >= 3 {
		if text[0] == '1' && text[1] == '/' && text[2] == '2' {
			if len(text) < 4 || wordBoundary(text[3]) && text[3] != '/' {
				out.WriteString("&frac12;")
				return 2
			}
		}

		if text[0] == '1' && text[1] == '/' && text[2] == '4' {
			if len(text) < 4 || wordBoundary(text[3]) && text[3] != '/' || (len(text) >= 5 && tolower(text[3]) == 't' && tolower(text[4]) == 'h') {
				out.WriteString("&frac14;")
				return 2
			}
		}

		if text[0] == '3' && text[1] == '/' && text[2] == '4' {
			if len(text) < 4 || wordBoundary(text[3]) && text[3] != '/' || (len(text) >= 6 && tolower(text[3]) == 't' && tolower(text[4]) == 'h' && tolower(text[5]) == 's') {
				out.WriteString("&frac34;")
				return 2
			}
		}
	}

	out.WriteByte(text[0])
	return 0
}

func (r *SPRenderer) smartDoubleQuoteVariant(out *bytes.Buffer, previousChar byte, text []byte, quote byte) int {
	nextChar := byte(0)
	if len(text) > 1 {
		nextChar = text[1]
	}
	if !smartQuoteHelper(out, previousChar, nextChar, quote, &r.inDoubleQuote, false) {
		out.WriteString("&quot;")
	}

	return 0
}

func (r *SPRenderer) smartDoubleQuote(out *bytes.Buffer, previousChar byte, text []byte) int {
	return r.smartDoubleQuoteVariant(out, previousChar, text, 'd')
}

func (r *SPRenderer) smartAngledDoubleQuote(out *bytes.Buffer, previousChar byte, text []byte) int {
	return r.smartDoubleQuoteVariant(out, previousChar, text, 'a')
}

func (r *SPRenderer) smartLeftAngle(out *bytes.Buffer, previousChar byte, text []byte) int {
	i := 0

	for i < len(text) && text[i] != '>' {
		i++
	}

	out.Write(text[:i+1])
	return i
}

type smartCallback func(out *bytes.Buffer, previousChar byte, text []byte) int

// NewSmartypantsRenderer constructs a Smartypants renderer object.
func NewSmartypantsRenderer(flags HTMLFlags) *SPRenderer {
	var (
		r SPRenderer

		smartAmpAngled      = r.smartAmp(true, false)
		smartAmpAngledNBSP  = r.smartAmp(true, true)
		smartAmpRegular     = r.smartAmp(false, false)
		smartAmpRegularNBSP = r.smartAmp(false, true)

		addNBSP = flags&SmartypantsQuotesNBSP != 0
	)

	if flags&SmartypantsAngledQuotes == 0 {
		r.callbacks['"'] = r.smartDoubleQuote
		if !addNBSP {
			r.callbacks['&'] = smartAmpRegular
		} else {
			r.callbacks['&'] = smartAmpRegularNBSP
		}
	} else {
		r.callbacks['"'] = r.smartAngledDoubleQuote
		if !addNBSP {
			r.callbacks['&'] = smartAmpAngled
		} else {
			r.callbacks['&'] = smartAmpAngledNBSP
		}
	}
	r.callbacks['\''] = r.smartSingleQuote
	r.callbacks['('] = r.smartParens
	if flags&SmartypantsDashes != 0 {
		if flags&SmartypantsLatexDashes == 0 {
			r.callbacks['-'] = r.smartDash
		} else {
			r.callbacks['-'] = r.smartDashLatex
		}
	}
	r.callbacks['.'] = r.smartPeriod
	if flags&SmartypantsFractions == 0 {
		r.callbacks['1'] = r.smartNumber
		r.callbacks['3'] = r.smartNumber
	} else {
		for ch := '1'; ch <= '9'; ch++ {
			r.callbacks[ch] = r.smartNumberGeneric
		}
	}
	r.callbacks['<'] = r.smartLeftAngle
	r.callbacks['`'] = r.smartBacktick
	return &r
}

// Process is the entry point of the Smartypants renderer.
func (r *SPRenderer) Process(w io.Writer, text []byte) {
	mark := 0
	for i := 0; i < len(text); i++ {
		if action := r.callbacks[text[i]]; action != nil {
			if i > mark {
				w.Write(text[mark:i])
			}
			previousChar := byte(0)
			if i > 0 {
				previousChar = text[i-1]
			}
			var tmp bytes.Buffer
			i += action(&tmp, previousChar, text[i:])
			w.Write(tmp.Bytes())
			mark = i + 1
		}
	}
	if mark < len(text) {
		w.Write(text[mark:])
	}
}
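Inside blackfriday the HTML renderer drives SPRenderer automatically when the Smartypants flags are set, but Process can also be called directly on already-escaped text. A minimal sketch using only the exported names shown above (NewSmartypantsRenderer, Process, and the SmartypantsDashes/SmartypantsFractions HTMLFlags referenced in NewSmartypantsRenderer); the import path assumes the vendored module:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/russross/blackfriday/v2"
)

func main() {
	// Dashes and generic fractions enabled; quotes and ellipses are always on.
	sp := blackfriday.NewSmartypantsRenderer(
		blackfriday.SmartypantsDashes | blackfriday.SmartypantsFractions)

	var out bytes.Buffer
	sp.Process(&out, []byte(`"Halfway" there -- 1/2 done...`))
	// Emits curly-quote, dash, fraction, and ellipsis HTML entities, e.g.
	// &ldquo;...&rdquo;, &mdash;, <sup>1</sup>&frasl;<sub>2</sub>, &hellip;.
	fmt.Println(out.String())
}
```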
Some files were not shown because too many files have changed in this diff.