Fixed tcp
parent 10d301d784
commit 050588cb73
3 changed files with 266 additions and 331 deletions
README.md: 73 lines changed
@@ -3,10 +3,13 @@
 Read and manipulate tox profile files. It started as a simple script from
 <https://stackoverflow.com/questions/30901873/what-format-are-tox-files-stored-in>
 
-```tox_savefile.py``` reads a Tox profile and prints to stderr various
+```tox_profile.py``` reads a Tox profile and prints to stderr various
 things that it finds. It can then write what it found in JSON/YAML/REPR/PPRINT
 to a file. It can also test the nodes in a profile using ```nmap```.
 
+(There are sometimes problems with the JSON info dump of bytes keys:
+```TypeError: Object of type bytes is not JSON serializable```)
+
 It can also download, select, or test nodes in a ```DHTnode.json``` file.
 
 It can also decrypt a profile, saving the output to a file.
@@ -30,15 +33,15 @@ to stdout
 to a file.
 
 ```
-usage: tox_savefile.py [-h]
+usage: tox_profile.py [-h]
        [--command info|decrypt|nodes|edit]
-       [--info info|repr|yaml|json|pprint|nmap_udp|nmap_tcp]
+       [--info info|repr|yaml|json|pprint|nmap_dht|nmap_relay]
        [--indent INDENT]
-       [--nodes select_tcp|select_udp|select_version|nmap_tcp|nmap_udp|download|check]
+       [--nodes select_tcp|select_udp|select_version|nmap_tcp|nmap_udp|download|check|clean]
        [--download_nodes_url DOWNLOAD_NODES_URL]
        [--edit help|section,num,key,val]
        [--output OUTPUT]
        profile
 ```
 Positional arguments:
 ```
@@ -50,7 +53,7 @@ Optional arguments:
 --command {info,decrypt,nodes,edit}
                       Action command - default: info
 --output OUTPUT       Destination for info/decrypt/nodes - can be the same as input
---info info|repr|yaml|json|pprint|nmap_udp|nmap_tcp (may require nmap)
+--info info|repr|yaml|json|pprint|nmap_dht|nmap_relay (may require nmap)
                       Format for info command
 --indent INDENT       Indent for yaml/json/pprint
 --nodes select_tcp|select_udp|select_version|nmap_tcp|nmap_udp|download
@@ -66,10 +69,23 @@ Optional arguments:
 Choose one of ```{info,repr,yaml,json,pprint,save}```
 for the format for info command.
 
-Choose one of ```{nmap_udp,nmap_tcp}```
+Choose one of ```{nmap_dht,nmap_relay,nmap_path}```
 to run tests using ```nmap``` for the ```DHT``` and ```TCP_RELAY```
 sections of the profile. Requires ```nmap``` and uses ```sudo```.
 
+```
+  --info default='info',
+    choices=[info, save, repr, yaml, json, pprint]
+    with --info=info prints info about the profile to stderr
+    yaml, json, pprint, repr - output format
+    nmap_dht - test DHT nodes with nmap
+    nmap_relay - test TCP_RELAY nodes with nmap
+    nmap_path - test PATH_NODE nodes with nmap
+  --indent for pprint/yaml/json default=2
+```
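For example, an nmap test of the DHT nodes in a profile might be run like this (paths are illustrative; it needs ```nmap``` installed and will use ```sudo```):

```
python3 tox_profile.py --command info --info nmap_dht --output /tmp/dht_scan.out ~/.config/tox/toxic_profile.tox
```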
 
 #### Saving a copy
 
 The code now can generate a saved copy of the profile as it parses the profile.
@@ -83,6 +99,7 @@ decryption).
 
 ### --command nodes
 
+
 Takes a DHTnodes.json file as an argument.
 Choose one of ```{select_tcp,select_udp,select_version}```
 for ```--nodes``` to select TCP nodes, UDP nodes,
@@ -94,6 +111,29 @@ Requires ```nmap``` and uses ```sudo```.
 
 Choose ```download``` to download the nodes from ```--download_nodes_url```
 
+Choose ```check``` to check the downloaded nodes; the error return
+is the number of nodes with errors.
+
+Choose ```clean``` to clean the downloaded nodes, and give
+```--output``` for the file of nodes cleaned of errors.
+
+Check and clean will also try to ping the nodes on the relevant ports,
+and clean will update the ```status_tcp```, ```status_udp```, and
+```last_ping``` fields of the nodes.
+
+```
+  --nodes
+    choices=[select_tcp, select_udp, nmap_tcp, select_version, nmap_udp, check, download, clean]
+    select_udp - select udp nodes
+    select_tcp - select tcp nodes
+    nmap_udp - test UDP nodes with nmap
+    nmap_tcp - test TCP nodes with nmap
+    select_version - select nodes that are the latest version
+    download - download nodes from --download_nodes_url
+    check - check nodes from --download_nodes_url
+    clean - check nodes and save them as --output
+  --download_nodes_url https://nodes.tox.chat/json
+```
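For example, checking a downloaded nodes file and then writing a cleaned copy might look like this (file names are illustrative; ```clean``` needs ```--output```):

```
python3 tox_profile.py --command nodes --nodes check DHTnodes.json
python3 tox_profile.py --command nodes --nodes clean --output DHTnodes.clean.json DHTnodes.json
```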
 
 ### --command decrypt
 
 Decrypt a profile, with ```--output``` to a filename.
@@ -124,6 +164,12 @@ The ```num``` field is to accommodate sections that have lists:
 The ```--output``` can be the same as input as the input file is read
 and closed before processing starts.
 
+```
+  --edit
+    help - print a summary of what fields can be edited
+    section,num,key,val - edit the field section,num,key with val
+```
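For example, to list the editable fields of a profile before changing anything (path illustrative):

```
python3 tox_profile.py --command edit --edit help ~/.config/tox/toxic_profile.tox
```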
+
 You can use the ```--edit``` command to synchronize profiles between
 different clients while keeping the keypair:
 e.g. you could keep your profile from toxic as master, and copy it over
@@ -132,7 +178,8 @@ your qtox/toxygen/TriFa profile while preserving their keypair and NOSPAM:
 1. Use ```--command info --info info``` on the target profile to get the
    ```Nospam```, ```Public_key``` and ```Private_key``` of the target.
 2. Backup the target and copy the source profile to the target.
-3. Edit the target with the values from 1) with:```
+3. Edit the target with the values from 1) with:
+```
 --command edit --edit NOSPAMKEYS,.,Nospam,hexstr --output target target
 --command edit --edit NOSPAMKEYS,.,Public_key,hexstr --output target target
 --command edit --edit NOSPAMKEYS,.,Private_key,hexstr --output target target
@@ -27,9 +27,9 @@ commands, or the filename of the nodes file for the nodes command.
    choices=[info, save, repr, yaml,json, pprint]
    with --info=info prints info about the profile to stderr
    yaml,json, pprint, repr - output format
-   nmap_udp - test DHT nodes with nmap
-   nmap_tcp - test TCP_RELAY nodes with nmap
-   nmap_onion - test PATH_NODE nodes with nmap
+   nmap_dht - test DHT nodes with nmap
+   nmap_relay - test TCP_RELAY nodes with nmap
+   nmap_path - test PATH_NODE nodes with nmap
    --indent for pprint/yaml/json default=2
 
    --nodes
@@ -41,6 +41,7 @@ commands, or the filename of the nodes file for the nodes command.
    select_version - select nodes that are the latest version
    download - download nodes from --download_nodes_url
    check - check nodes from --download_nodes_url
+   clean - check nodes and save them as --output
    --download_nodes_url https://nodes.tox.chat/json
 
    --edit
@@ -64,6 +65,9 @@ from pprint import pprint
 import shutil
 import json
 
+import warnings
+warnings.filterwarnings('ignore')
+
 try:
     # https://pypi.org/project/msgpack/
     import msgpack
@@ -103,7 +107,7 @@ LOG = logging.getLogger('TSF')
 # Fix for Windows
 sDIR = os.environ.get('TMPDIR', '/tmp')
 sTOX_VERSION = "1000002018"
-sVER_WANT = "1000002018"
+sVER_MIN = "1000002013"
 # 3 months
 iOLD_SECS = 60*60*24*30*3
 
@@ -600,9 +604,10 @@ def process_chunk(index, state, oArgs=None):
     elif data_type == MESSENGER_STATE_TYPE_TCP_RELAY:
         if length > 0:
             lIN = lProcessNodeInfo(state, index, length, result, "TCPnode")
+            LOG.info(f"TYPE_TCP_RELAY {len(lIN)} nodes {length} length")
         else:
             lIN = []
-            LOG.info(f"NO {label}")
+            LOG.warn(f"NO {label} {length} length")
         aOUT.update({label: lIN})
         if oArgs.command == 'edit' and section == label:
             ## TCP_RELAY,.,TCPnode,
@@ -694,7 +699,7 @@ jq '.|with_entries(select(.key|match("nodes"))).nodes[]|select(.status_tcp)|sele
 fi
 done"""
 
-def vBashFileNmapTcp():
+def sBashFileNmapTcp():
     assert bHAVE_JQ, "jq is required for this command"
     assert bHAVE_NMAP, "nmap is required for this command"
     assert bHAVE_BASH, "bash is required for this command"
@@ -704,6 +709,7 @@ def vBashFileNmapTcp():
     with open(sFile, 'wt') as iFd:
         iFd.write(sNMAP_TCP)
     os.chmod(sFile, 0o0775)
+    assert os.path.exists(sFile)
     return sFile
 
 def vBashFileNmapUdp():
@@ -720,21 +726,43 @@ def vBashFileNmapUdp():
                  replace('tcp_ports','udp_ports').
                  replace('status_tcp','status_udp'))
     os.chmod(sFile, 0o0775)
+    assert os.path.exists(sFile)
     return sFile
 
+def lParseNapOutput(sFile):
+    lRet = []
+    for sLine in open(sFile, 'rt').readlines():
+        if sLine.startswith('Failed to resolve ') or \
+           'Temporary failure in name resolution' in sLine:
+            lRet += [sLine]
+    return lRet
+
 sBLURB = """
 I see you have a torrc. You can help the network by running a bootstrap daemon
 as a hidden service, or even using the --tcp_server option of your client.
 """
 
-def iNodesCheckNodes(json_nodes, oArgs):
+def lNodesCheckNodes(json_nodes, oArgs, bClean=False):
     """
     Checking NODES.json
     """
-    iWarns = 0
+    lErrs = []
     iErrs = 0
+    nth = 0
+    if bClean: lNew=[]
     # assert type(json_nodes) == dict
     for node in json_nodes:
+        # new fields:
+        if bClean:
+            new_node = {}
+            for key,val in node.items():
+                if type(val) == bytes:
+                    new_node[key] = str(val, 'UTF-8')
+                else:
+                    new_node[key] = val
+            if 'onions' not in new_node:
+                new_node['onions'] = []
+
         for ipv in ['ipv4','ipv6']:
             if not node[ipv] in lNULLS:
                 LOG.info(f"Checking {node[ipv]}")
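The new ```lParseNapOutput``` helper only collects name-resolution failures from a saved nmap run; roughly the same check can be done by hand with ```grep``` over the output file (path illustrative):

```
grep -E 'Failed to resolve |Temporary failure in name resolution' /tmp/toxic_nodes.nmap_udp
```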
@@ -752,11 +780,11 @@ def iNodesCheckNodes(json_nodes, oArgs):
            not node['tcp_ports'] and not '.onion' in node['location']:
             LOG.warn("No ports to contact the daemon on")
 
-        if node["version"] < "1000002013":
-            iErrs += 1
+        if node["version"] and node["version"] < "1000002013":
+            lErrs += [nth]
             LOG.error(f"vulnerable version {node['version']} < 1000002013")
-        elif node["version"] < sVER_WANT:
-            LOG.warn(f"outdated version {node['version']} < {sVER_WANT}")
+        elif node["version"] and node["version"] < sVER_MIN:
+            LOG.warn(f"outdated version {node['version']} < {sVER_MIN}")
 
         # Put the onion address in the location after the country code
         if len(node["location"]) not in [2, 65]:
@@ -786,39 +814,34 @@ def iNodesCheckNodes(json_nodes, oArgs):
             else:
                 LOG.warn(f"Found an onion that resolves to {s}")
 
-        if node['last_ping'] == 0:
-            iErrs += 1
-            LOG.error(f"node has never been pinged")
-        elif time.time() - node['last_ping'] > iOLD_SECS:
-            LOG.error(f"node has not been pinged in more than 3 months")
+        if node['last_ping'] and time.time() - node['last_ping'] > iOLD_SECS:
+            LOG.debug(f"node has not been pinged in more than 3 months")
 
         # suggestions YMMV
-        if str(node['port']).startswith('3344'):
-            LOG.debug(f"Maybe run on a non-standard port to resist blocking {node['port']}")
 
-        if node['tcp_ports']:
-            for port in node['tcp_ports']:
-                if str(port).startswith('3344') or port in [33445, 3389]:
-                    LOG.debug(f"Maybe run tcp_ports on a non-standard port to resist blocking: {node['port']}")
+        if len(node['maintainer']) > 75 and len(node['motd']) < 75:
+            pass
+            # look for onion LOG.debug(f"Maybe put a ToxID: in motd so people can contact you.")
 
-        if len(node['maintainer']) < 75 and len(node['motd']) < 75:
-            LOG.debug(f"Maybe add a ToxID: in the motd so people can contact you.")
-        elif len(node['maintainer']) > 75 and len(node['motd']) < 75:
-            LOG.debug(f"Maybe put the ToxID: in motd so people can contact you.")
-        elif len(node['maintainer']) > 0 and len(node['motd']) < 1:
-            LOG.debug(f"Maybe put a ToxID: in motd so people can contact you.")
+        if bClean and not nth in lErrs:
+            lNew+=[new_node]
+        nth += 1
 
     # fixme look for /etc/tor/torrc but it may not be readable
     if bHAVE_TOR and os.path.exists('/etc/tor/torrc'):
         print(sBLURB)
-    return iErrs
+    if bClean:
+        return lNew
+    else:
+        return lErrs
 
-def iNodesFileCheck(sProOrNodes):
+def iNodesFileCheck(sProOrNodes, oArgs, bClean=False):
     try:
         if not os.path.exists(sProOrNodes):
             raise RuntimeError("iNodesFileCheck file not found " +sProOrNodes)
         with open(sProOrNodes, 'rt') as fl:
-            json_nodes = json.loads(fl.read())['nodes']
+            json_all = json.loads(fl.read())
+            json_nodes = json_all['nodes']
     except Exception as e:
         LOG.exception(f"{oArgs.command} error reading {sProOrNodes}")
         return 1
@@ -826,7 +849,24 @@ def iNodesFileCheck(sProOrNodes):
     LOG.info(f"iNodesFileCheck checking JSON")
     i = 0
     try:
-        i = iNodesCheckNodes(json_nodes, oArgs)
+        al = lNodesCheckNodes(json_nodes, oArgs, bClean=bClean)
+        if bClean == False:
+            i = len(al)
+        else:
+            now = time.time()
+            aOut = dict(last_scan=json_all['last_scan'],
+                        last_refresh=now,
+                        nodes=al)
+            sOut = oArgs.output
+            try:
+                LOG.debug(f"iNodesFileClean saving to {sOut}")
+                oStream = open(sOut, 'wt', encoding=sENC)
+                json.dump(aOut, oStream, indent=oArgs.indent)
+                if oStream.write('\n') > 0: i = 0
+            except Exception as e:
+                LOG.exception(f"iNodesFileClean error dumping JSON to {sOut}")
+                return 3
+
     except Exception as e:
         LOG.exception(f"iNodesFileCheck error checking JSON")
         i = -2
@@ -858,30 +898,41 @@ def iNodesFileClean(sProOrNodes):
     LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
     return 0
 
-def vOsSystemNmapUdp(l, oArgs):
-    iErrs = 0
-    for elt in aOUT["DHT"]:
-        cmd = f"sudo nmap -Pn -n -sU -p U:{elt['Port']} {elt['Ip']}"
-        iErrs += os.system(cmd +f" >> {oArgs.output} 2>&1")
-    if iErrs:
-        LOG.warn(f"{oArgs.info} {iErrs} ERRORs to {oArgs.output}")
-        print(f"{oArgs.info} {iErrs} ERRORs to {oArgs.output}")
-    else:
-        LOG.info(f"{oArgs.info} NO errors to {oArgs.output}")
-        print(f"{oArgs.info} NO errors to {oArgs.output}")
-
-def vOsSystemNmapTcp(l, oArgs):
-    iErrs = 0
-    for elt in l:
-        cmd = f"sudo nmap -Pn -n -sT -p T:{elt['Port']} {elt['Ip']}"
-        print(f"{oArgs.info} NO errors to {oArgs.output}")
-        iErrs += os.system(cmd +f" >> {oArgs.output} 2>&1")
-    if iErrs:
-        LOG.warn(f"{oArgs.info} {iErrs} ERRORs to {oArgs.output}")
-        print(f"{oArgs.info} {iErrs} ERRORs to {oArgs.output}")
-    else:
-        LOG.info(f"{oArgs.info} NO errors to {oArgs.output}")
-        print(f"{oArgs.info} NO errors to {oArgs.output}")
+def iOsSystemNmapUdp(l, oArgs):
+    iErrs = 0
+    for elt in l:
+        cmd = f"sudo nmap -Pn -n -sU -p U:{elt['Port']} {elt['Ip']}"
+        LOG.debug(f"{oArgs.info} {cmd} to {oArgs.output}")
+        iErrs += os.system(cmd +f" >> {oArgs.output} 2>&1")
+    if iErrs:
+        LOG.warn(f"{oArgs.info} {iErrs} ERRORs to {oArgs.output}")
+    else:
+        LOG.info(f"{oArgs.info} NO errors to {oArgs.output}")
+        lRet = lParseNapOutput(oArgs.output)
+        if lRet:
+            for sLine in lRet:
+                LOG.warn(f"{oArgs.nodes} {sLine}")
+        iErr = len(lRet)
+        iErrs += iErr
+    return iErrs
+
+def iOsSystemNmapTcp(l, oArgs):
+    iErrs = 0
+    LOG.debug(f"{len(l)} nodes to {oArgs.output}")
+    for elt in l:
+        cmd = f"sudo nmap -Pn -n -sT -p T:{elt['Port']} {elt['Ip']}"
+        LOG.debug(f"iOsSystemNmapTcp {cmd} to {oArgs.output}")
+        iErr += os.system(cmd +f" >> {oArgs.output} 2>&1")
+    if iErr:
+        LOG.warn(f"iOsSystemNmapTcp {iErrs} ERRORs to {oArgs.output}")
+    else:
+        lRet = lParseNapOutput(oArgs.output)
+        if lRet:
+            for sLine in lRet:
+                LOG.warn(f"{oArgs.nodes} {sLine}")
+            iErr = len(lRet)
+            iErrs += iErr
+    return iErrs
 
 def vSetupLogging(loglevel=logging.DEBUG):
     global LOG
@@ -915,6 +966,7 @@ def iMain(sProOrNodes, oArgs):
         LOG.error(f"decrypting {sProOrNodes} - {e}")
         sys.exit(1)
     assert bSAVE
+    LOG.debug(f"{oArgs.command} {len(bSAVE)} bytes")
 
     oStream = None
     if oArgs.command == 'decrypt':
@@ -928,8 +980,8 @@ def iMain(sProOrNodes, oArgs):
     iRet = -1
     ep_sec = str(int(time.time()))
     json_head = '{"last_scan":' +ep_sec \
                 +',"last_refresh":' +ep_sec \
                 +',"nodes":['
     if oArgs.nodes == 'select_tcp':
         assert oArgs.output, "--output required for this command"
         assert bHAVE_JQ, "jq is required for this command"
@@ -963,40 +1015,65 @@ def iMain(sProOrNodes, oArgs):
         assert oArgs.output, "--output required for this command"
         if not bAreWeConnected():
             LOG.warn(f"{oArgs.nodes} we are not connected")
-        cmd = vBashFileNmapTcp()
-        iRet = os.system(f"bash {cmd} < '{sProOrNodes}'" +f" >'{oArgs.output}'")
+        else:
+            cmd = sBashFileNmapTcp()
+            cmd = f"bash {cmd} < '{sProOrNodes}' >'{oArgs.output}' 2>&1"
+            LOG.debug(cmd)
+            iRet = os.system(cmd)
+            if iRet == 0:
+                lRet = lParseNapOutput(oArgs.output)
+                if lRet:
+                    for sLine in lRet:
+                        LOG.warn(f"{oArgs.nodes} {sLine}")
+                    iRet = len(lRet)
+
     elif oArgs.nodes == 'nmap_udp':
         assert oArgs.output, "--output required for this command"
         if not bAreWeConnected():
             LOG.warn(f"{oArgs.nodes} we are not connected")
+        elif bHAVE_TOR:
+            LOG.warn(f"{oArgs.nodes} this won't work behind tor")
         cmd = vBashFileNmapUdp()
-        iRet = os.system(f"bash {cmd} < '{sProOrNodes}'" +f" >'{oArgs.output}'")
+        cmd = f"bash {cmd} < '{sProOrNodes}'" +f" >'{oArgs.output}' 2>&1"
+        LOG.debug(cmd)
+        iRet = os.system(cmd)
+        if iRet == 0:
+            lRet = lParseNapOutput(oArgs.output)
+            if lRet:
+                for sLine in lRet:
+                    LOG.warn(f"{oArgs.nodes} {sLine}")
+                iRet = len(lRet)
+
     elif oArgs.nodes == 'download' and download_url:
         if not bAreWeConnected():
             LOG.warn(f"{oArgs.nodes} we are not connected")
         url = oArgs.download_nodes_url
         b = download_url(url)
-        if not bSAVE:
+        if not b:
             LOG.warn("failed downloading list of nodes")
             iRet = -1
         else:
             if oArgs.output:
-                oStream = open(oArgs.output, 'rb')
+                oStream = open(oArgs.output, 'wb')
                 oStream.write(b)
             else:
                 oStream = sys.stdout
                 oStream.write(str(b, sENC))
-            iRet = -1
+            iRet = 0
             LOG.info(f"downloaded list of nodes to {oStream}")
 
     elif oArgs.nodes == 'check':
-        i = iNodesFileCheck(sProOrNodes)
-        return i
+        i = iNodesFileCheck(sProOrNodes, oArgs, bClean=False)
+        iRet = i
 
+    elif oArgs.nodes == 'clean':
+        assert oArgs.output, "--output required for this command"
+        i = iNodesFileCheck(sProOrNodes, oArgs, bClean=True)
+        iRet = i
+
     if iRet > 0:
         LOG.warn(f"{oArgs.nodes} iRet={iRet} to {oArgs.output}")
 
     elif iRet == 0:
         LOG.info(f"{oArgs.nodes} iRet={iRet} to {oArgs.output}")
 
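The code above drives the generated bash/nmap helpers against a nodes file; a typical run is (file names illustrative; needs ```jq```, ```nmap``` and ```sudo```):

```
python3 tox_profile.py --command nodes --nodes nmap_udp --output /tmp/DHTnodes.nmap_udp DHTnodes.json
```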
@@ -1017,6 +1094,7 @@ def iMain(sProOrNodes, oArgs):
     process_chunk(len(bOUT), bSAVE, oArgs)
     if not bOUT:
         LOG.error(f"{oArgs.command} NO bOUT results")
+        iRet = 1
     else:
         oStream = None
         LOG.debug(f"command={oArgs.command} len bOUT={len(bOUT)} results")
@@ -1028,42 +1106,83 @@ def iMain(sProOrNodes, oArgs):
             LOG.info(f"{oArgs.info}ed iRet={iRet} to {oArgs.output}")
         elif oArgs.info == 'info':
             pass
-        elif oArgs.info == 'yaml' and yaml:
-            LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
-            oStream = open(oArgs.output, 'wt', encoding=sENC)
-            yaml.dump(aOUT, stream=oStream, indent=oArgs.indent)
-            if oStream.write('\n') > 0: iRet = 0
-            LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
-        elif oArgs.info == 'json' and json:
-            LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
-            oStream = open(oArgs.output, 'wt', encoding=sENC)
-            json.dump(aOUT, oStream, indent=oArgs.indent)
-            if oStream.write('\n') > 0: iRet = 0
-            LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
+            iRet = 0
+        elif oArgs.info == 'yaml':
+            if not yaml:
+                LOG.warn(f"{oArgs.command} no yaml support")
+                iRet = -1
+            else:
+                LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
+                oStream = open(oArgs.output, 'wt', encoding=sENC)
+                try:
+                    assert aOUT
+                    yaml.dump(aOUT, stream=oStream, indent=oArgs.indent)
+                except Exception as e:
+                    LOG.warn(f'WARN: {e}')
+                else:
+                    oStream.write('\n')
+                    iRet = 0
+                LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
+
+        elif oArgs.info == 'json':
+            if not json:
+                LOG.warn(f"{oArgs.command} no json support")
+                iRet = -1
+            else:
+                LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
+                oStream = open(oArgs.output, 'wt', encoding=sENC)
+                try:
+                    json.dump(aOUT, oStream, indent=oArgs.indent, skipkeys=True)
+                except:
+                    LOG.warn("There are sometimes problems with the json info dump of bytes keys: ```TypeError: Object of type bytes is not JSON serializable```")
+                oStream.write('\n') > 0
+                iRet = 0
+                LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
+
         elif oArgs.info == 'repr':
             LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
             oStream = open(oArgs.output, 'wt', encoding=sENC)
             if oStream.write(repr(bOUT)) > 0: iRet = 0
             if oStream.write('\n') > 0: iRet = 0
             LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
 
         elif oArgs.info == 'pprint':
             LOG.debug(f"{oArgs.command} saving to {oArgs.output}")
             oStream = open(oArgs.output, 'wt', encoding=sENC)
             pprint(aOUT, stream=oStream, indent=oArgs.indent, width=80)
             iRet = 0
             LOG.info(f"{oArgs.info}ing iRet={iRet} to {oArgs.output}")
-        elif oArgs.info == 'nmap_tcp' and bHAVE_NMAP:
+
+        elif oArgs.info == 'nmap_relay':
+            assert bHAVE_NMAP, "nmap is required for this command"
             assert oArgs.output, "--output required for this command"
-            vOsSystemNmapTcp(aOUT["TCP_RELAY"], oArgs)
-        elif oArgs.info == 'nmap_udp' and bHAVE_NMAP:
+            if aOUT["TCP_RELAY"]:
+                iRet = iOsSystemNmapTcp(aOUT["TCP_RELAY"], oArgs)
+            else:
+                LOG.warn(f"{oArgs.info} no TCP_RELAY")
+                iRet = 0
+
+        elif oArgs.info == 'nmap_dht':
+            assert bHAVE_NMAP, "nmap is required for this command"
             assert oArgs.output, "--output required for this command"
-            vOsSystemNmapUdp(aOUT["DHT"], oArgs)
-        elif oArgs.info == 'nmap_onion' and bHAVE_NMAP:
+            if aOUT["DHT"]:
+                iRet = iOsSystemNmapUdp(aOUT["DHT"], oArgs)
+            else:
+                LOG.warn(f"{oArgs.info} no DHT")
+                iRet = 0
+
+        elif oArgs.info == 'nmap_path':
+            assert bHAVE_NMAP, "nmap is required for this command"
             assert oArgs.output, "--output required for this command"
-            vOsSystemNmapUdp(aOUT["PATH_NODE"], oArgs)
+            if aOUT["PATH_NODE"]:
+                iRet = iOsSystemNmapUdp(aOUT["PATH_NODE"], oArgs)
+            else:
+                LOG.warn(f"{oArgs.info} no PATH_NODE")
+                iRet = 0
+
         if oStream and oStream != sys.stdout and oStream != sys.stderr:
             oStream.close()
+        return iRet
 
 def oMainArgparser(_=None):
     if not os.path.exists('/proc/sys/net/ipv6'):
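A formatted dump of a decrypted profile, as handled by the branches above, might be produced with (paths illustrative):

```
python3 tox_profile.py --command info --info json --indent 2 --output /tmp/toxic_profile.json /tmp/toxic_profile.bin
```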
@@ -1087,11 +1206,12 @@ def oMainArgparser(_=None):
     parser.add_argument('--indent', type=int, default=2,
                         help='Indent for yaml/json/pprint')
     choices=['info', 'save', 'repr', 'yaml','json', 'pprint']
-    if bHAVE_NMAP: choices += ['nmap_tcp', 'nmap_udp', 'nmap_onion']
+    if bHAVE_NMAP:
+        choices += ['nmap_relay', 'nmap_dht', 'nmap_path']
     parser.add_argument('--info', type=str, default='info',
                         choices=choices,
                         help='Format for info command')
-    choices = ['check']
+    choices = ['check', 'clean']
     if bHAVE_JQ:
         choices += ['select_tcp', 'select_udp', 'select_version']
     if bHAVE_NMAP: choices += ['nmap_tcp', 'nmap_udp']
@@ -1,232 +0,0 @@
-#!/bin/sh
-# -*- mode: sh; fill-column: 75; tab-width: 8; coding: utf-8-unix -*-
-
-# tox_savefile.py has a lot of features so it needs test coverage
-
-PREFIX=/mnt/o/var/local/src
-EXE=python3.sh
-WRAPPER=$PREFIX/toxygen_wrapper
-
-[ -f /usr/local/bin/usr_local_tput.bash ] && \
-    . /usr/local/bin/usr_local_tput.bash || {
-    DEBUG() { echo DEBUG $* ; }
-    INFO() { echo INFO $* ; }
-    WARN() { echo WARN $* ; }
-    ERROR() { echo ERROR $* ; }
-}
-
-# set -- -e
-target=$PREFIX/tox_profile/tox_savefile.py
-[ -s $target ] || exit 1
-
-tox=$HOME/.config/tox/toxic_profile.tox
-[ -s $tox ] || exit 2
-
-[ -d $WRAPPER ] || {
-    ERROR wrapper is required https://git.plastiras.org/emdee/toxygen_wrapper
-    exit 3
-}
-export PYTHONPATH=$WRAPPER
-
-json=$HOME/.config/tox/DHTnodes.json
-[ -s $json ] || exit 4
-
-which jq > /dev/null && HAVE_JQ=1 || HAVE_JQ=0
-which nmap > /dev/null && HAVE_NMAP=1 || HAVE_NMAP=0
-
-sudo rm -f /tmp/toxic_profile.* /tmp/toxic_nodes.*
-
-test_jq () {
-    [ $# -eq 3 ] || {
-        ERROR test_jq '#' "$@"
-        return 3
-    }
-    in=$1
-    out=$2
-    err=$3
-    [ -s $in ] || {
-        ERROR $i test_jq null $in
-        return 4
-    }
-    jq . < $in >$out 2>$err || {
-        ERROR $i test_jq $json
-        return 5
-    }
-    grep error: $err && {
-        ERROR $i test_jq $json
-        return 6
-    }
-    [ -s $out ] || {
-        ERROR $i null $out
-        return 7
-    }
-    [ -s $err ] || rm -f $err
-    return 0
-}
-
-i=0
-[ "$HAVE_JQ" = 0 ] || \
-    test_jq $json /tmp/toxic_nodes.json /tmp/toxic_nodes.err || exit ${i}$?
-[ -f /tmp/toxic_nodes.json ] || cp -p $json /tmp/toxic_nodes.json
-json=/tmp/toxic_nodes.json
-
-i=1
-# required password
-INFO $i decrypt /tmp/toxic_profile.bin
-$EXE $target --command decrypt --output /tmp/toxic_profile.bin $tox || exit ${i}1
-[ -s /tmp/toxic_profile.bin ] || exit ${i}2
-
-tox=/tmp/toxic_profile.bin
-INFO $i info $tox
-$EXE $target --command info --info info $tox 2>/tmp/toxic_profile.info || {
-    ERROR $i $EXE $target --command info --info info $tox
-    exit ${i}3
-}
-[ -s /tmp/toxic_profile.info ] || exit ${i}4
-
-INFO $i /tmp/toxic_profile.save
-$EXE $target --command info --info save --output /tmp/toxic_profile.save $tox 2>/dev/null || exit ${i}5
-[ -s /tmp/toxic_profile.save ] || exit ${i}6
-
-i=2
-for the_tox in $tox /tmp/toxic_profile.save ; do
-    DBUG $i $the_tox
-    the_base=`echo $the_tox | sed -e 's/.save$//' -e 's/.tox$//'`
-    for elt in json yaml pprint repr ; do
-        INFO $i $the_base.$elt
-        DBUG $EXE $target \
-            --command info --info $elt \
-            --output $the_base.$elt $the_tox '2>'$the_base.$elt.err
-        $EXE $target --command info --info $elt \
-            --output $the_base.$elt $the_tox 2>$the_base.$nmap.err || exit ${i}0
-        [ -s $the_base.$elt ] || exit ${i}1
-    done
-
-    $EXE $target --command edit --edit help $the_tox 2>/dev/null || exit ${i}2
-
-    # edit the status message
-    INFO $i $the_base.Status_message 'STATUSMESSAGE,.,Status_message,Toxxed on Toxic'
-    $EXE $target --command edit --edit 'STATUSMESSAGE,.,Status_message,Toxxed on Toxic' \
-        --output $the_base.Status_message.tox $the_tox 2>&1|grep EDIT || exit ${i}3
-    [ -s $the_base.Status_message.tox ] || exit ${i}3
-    $EXE $target --command info $the_base.Status_message.tox 2>&1|grep Toxxed || exit ${i}4
-
-    # edit the nick_name
-    INFO $i $the_base.Nick_name 'NAME,.,Nick_name,FooBar'
-    $EXE $target --command edit --edit 'NAME,.,Nick_name,FooBar' \
-        --output $the_base.Nick_name.tox $the_tox 2>&1|grep EDIT || exit ${i}5
-    [ -s $the_base.Nick_name.tox ] || exit ${i}5
-    $EXE $target --command info $the_base.Nick_name.tox 2>&1|grep FooBar || exit ${i}6
-
-    # set the DHTnodes to empty
-    INFO $i $the_base.noDHT 'DHT,.,DHTnode,'
-    $EXE $target --command edit --edit 'DHT,.,DHTnode,' \
-        --output $the_base.noDHT.tox $the_tox 2>&1|grep EDIT || exit ${i}7
-    [ -s $the_base.noDHT.tox ] || exit ${i}7
-    $EXE $target --command info $the_base.noDHT.tox 2>&1|grep 'NO DHT' || exit ${i}8
-
-done
-
-i=3
-[ "$HAVE_JQ" = 0 ] || \
-for the_json in $json ; do
-    DBUG $i $the_json
-    the_base=`echo $the_json | sed -e 's/.json$//' -e 's/.tox$//'`
-    for nmap in select_tcp select_udp select_version ; do
-        $EXE $target --command nodes --nodes $nmap \
-            --output $the_base.$nmap.json $the_json || {
-            WARN $i $the_json $nmap ${i}1
-            continue
-        }
-        [ -s $the_base.$nmap.json ] || {
-            WARN $i $the_json $nmap ${i}2
-            continue
-        }
-        [ $nmap = select_tcp ] && \
-            grep '"status_tcp": false' $the_base.$nmap.json && {
-            WARN $i $the_json $nmap ${i}3
-            continue
-        }
-        [ $nmap = select_udp ] && \
-            grep '"status_udp": false' $the_base.$nmap.json && {
-            WARN $i $the_json $nmap ${i}4
-            continue
-        }
-        test_jq $the_base.$nmap.json $the_base.$nmap.json.out /tmp/toxic_nodes.err || {
-            retval=$?
-            WARN $i $the_base.$nmap.json 3$?
-        }
-        INFO $i $the_base.$nmap
-    done
-done
-
-ls -l /tmp/toxic_profile.* /tmp/toxic_nodes.*
-
-# DEBUG=0 /usr/local/bin/proxy_ping_test.bash tor || exit 0
-ip route | grep ^def || exit 0
-
-i=4
-the_tox=$tox
-[ "$HAVE_JQ" = 0 ] || \
-[ "$HAVE_NMAP" = 0 ] || \
-for the_tox in $tox /tmp/toxic_profile.save ; do
-    DBUG $i $the_tox
-    the_base=`echo $the_tox | sed -e 's/.save$//' -e 's/.tox$//'`
-    for nmap in nmap_tcp nmap_udp nmap_onion ; do
-        # [ $nmap = select_tcp ] && continue
-        # [ $nmap = select_udp ] && continue
-        INFO $i $the_base.$nmap
-        $EXE $target --command info --info $nmap \
-            --output $the_base.$nmap.out $the_tox 2>$the_base.$nmap.err || {
-            # select_tcp may be empty and jq errors
-            # exit ${i}1
-            WARN $i $the_base.$nmap.err
-            continue
-        }
-        [ -s $the_base.$nmap.out ] || {
-            ERROR $i $the_base.$nmap.out
-            continue
-        }
-    done
-done
-
-i=5
-[ "$HAVE_JQ" = 0 ] || \
-for the_json in $json ; do
-    DBUG $i $the_json
-    the_base=`echo $the_json | sed -e 's/.save$//' -e 's/.json$//'`
-    for nmap in nmap_tcp nmap_udp ; do
-        INFO $i $the_base.$nmap
-        $EXE $target --command nodes --nodes $nmap \
-            --output $the_base.$nmap $the_json 2>$the_base.$nmap.err || {
-            WARN $i $the_json $nmap ${i}1
-            continue
-        }
-        [ -s $the_base.$nmap ] || {
-            ERROR $i $the_json $nmap ${i}2
-            exit ${i}2
-        }
-    done
-done
-
-i=6
-DBUG $i
-$EXE $target --command nodes --nodes download \
-    --output /tmp/toxic_nodes.new $json || {
-    ERROR $i $EXE $target --command nodes --nodes download $json
-    exit ${i}1
-}
-[ -s /tmp/toxic_nodes.new ] || exit ${i}4
-json=/tmp/toxic_nodes.new
-[ "$HAVE_JQ" = 0 ] || \
-jq . < $json >/tmp/toxic_nodes.new.json 2>>/tmp/toxic_nodes.new.err || {
-    ERROR $i jq $json
-    exit ${i}2
-}
-[ "$HAVE_JQ" = 0 ] || \
-grep error: /tmp/toxic_nodes.new.err && {
-    ERROR $i jq $json
-    exit ${i}3
-}
-
-exit 0