This commit is contained in:
Agie Ashwood 2024-07-28 11:21:15 +00:00
commit 9673b359d3
136 changed files with 4661 additions and 0 deletions

20
.gitignore vendored Normal file
View File

@ -0,0 +1,20 @@
lib/
lib64/
share/
src/__pycache__/
src/webui/build/
src/webui/htmx-extensions/
src/webui/res/js/node_modules/
tui.rst
lib64
src/Components/__pycache__/
bin/
src/daisy/
src/catch/
src/Cryptography/__pycache__/
src/webui/__pycache__
src/logs
src/Packets/__pycache__
src/Transmission/__pycache__/
src/Filters/Protocols/__pycache__
src/Bubble/__pycache__

BIN
bubble.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 41 KiB

9
builddocs Executable file
View File

@ -0,0 +1,9 @@
rm -rf docs/*
sphinx-build -M markdown src docs
mv docs/markdown docs.tmp
rm -rf docs/doctrees
mv docs.tmp/* docs
rm -rf docs.tmp
mv docs/index.md docs/readme.md
sed -i '1s;^;![PierMesh logo](https://git.utopic.work/PierMesh/piermesh/raw/branch/main/piermeshicon.png)\n\n;' docs/readme.md
sed -i '1s;^;![Daisy logo](https://git.utopic.work/PierMesh/piermesh/raw/branch/main/imgs/daisydisplay.png)\n\n;' docs/Components/daisy.md

BIN
catch.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 38 KiB

1
docs/Bubble/config.md Normal file
View File

@ -0,0 +1 @@
# Configuration utilities

3
docs/Bubble/map.md Normal file
View File

@ -0,0 +1,3 @@
# Network map representation
### *class* Bubble.map.Network(icg=None, file=None)

3
docs/Bubble/router.md Normal file
View File

@ -0,0 +1,3 @@
# Data routing logic and data
### *class* Bubble.router.Router(cLog, nfpath='server.info')

92
docs/Components/daisy.md Normal file
View File

@ -0,0 +1,92 @@
![Daisy logo](https://git.utopic.work/PierMesh/piermesh/raw/branch/main/imgs/daisydisplay.png)
# Schemaless binary database
### *class* Components.daisy.Daisy(filepath: str, templates: dict = {}, template: bool = False, prefillDict: bool = False)
Base class for Daisy data representation
[🔗 Source](https://git.utopic.work/PierMesh/piermesh/src/branch/main/Components/daisy.py)
#### get()
Get record dictionary from memory
* **Returns:**
**self.msg**
* **Return type:**
dict
#### read(decrypt: bool = False, decryptKey=False)
Read record from disk to memory
* **Parameters:**
* **decrypt** (*bool*) Whether to decrypt record
* **decryptKey** Key to decrypt record
#### sublist()
Lists the contents of the directory if the object is a directory, otherwise returns None
#### write(override=False, encrypt: bool = False, encryptKey=None, recur: bool = False)
Write record to disk
* **Parameters:**
* **override** Either false or a dictionary of values to set on the record
* **encrypt** (*bool*) Whether to encrypt the record (TODO)
* **encryptKey** Key to encrypt record with, or None if not set
* **recur** (*bool*) Whether to recursively handle keys
### *class* Components.daisy.Cache(filepaths=None, cacheFile=None, path: str = 'daisy', walk: bool = False, isCatch: bool = False)
In memory collection of Daisy records
#### create(path: str, data: dict)
Create new record
* **Parameters:**
* **path** (*str*) Path to create record at
* **data** (*dict*) Data to populate record with
#### get(path: str)
Get record at path, else return False
path: str
: Path of record
#### refresh()
Reload from disk to memory
#### search(keydict: dict, strict: bool = True)
Search the cache for records with matching values
keydict: dict
: Values to search for
strict: bool
: Whether to require all values to match
### *class* Components.daisy.Catch(path: str = 'catch', filepaths=None, catchFile=None, walk: bool = False)
Subclass of Cache for handling catches
![image](https://git.utopic.work/PierMesh/piermesh/raw/branch/main/imgs/catchdisplay.png)
#### get(head: str, tail: str, fins=None)
Get catch by pieces
* **Parameters:**
* **head** (*str*) First part of catch (maximum: 4 characters)
* **tail** (*str*) Second part of catch (maximum: 16 characters)
* **fins** List of (maximum 8 characters) strings at the end of the catch, or None if none
#### sget(path: str)
Call Cache's get to retrieve a record
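The read/write cycle Daisy implements can be sketched with a minimal stand-in class. This is not the real implementation: Daisy serializes with msgpack and supports templates and encryption flags, while the `Record` class below uses stdlib `json` purely so the example runs without dependencies.

```python
import json
import os
import tempfile

# Minimal sketch of the Daisy record pattern: a dict held in memory
# (self.msg), loaded from disk on construction and flushed on write().
class Record:
    def __init__(self, filepath):
        self.filepath = filepath
        if os.path.exists(filepath):
            with open(filepath) as f:
                self.msg = json.load(f)
        else:
            self.msg = {}

    def write(self, override=None):
        # Like Daisy.write(override=...): merge values, then persist.
        if override:
            self.msg.update(override)
        with open(self.filepath, "w") as f:
            json.dump(self.msg, f)

    def get(self):
        return self.msg

path = os.path.join(tempfile.mkdtemp(), "record")
r = Record(path)
r.write({"nodeID": 42})
print(Record(path).get())  # {'nodeID': 42}
```

A fresh `Record(path)` re-reads the file, mirroring how `Daisy.read()` refreshes memory from disk.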

17
docs/Components/hopper.md Normal file
View File

@ -0,0 +1,17 @@
# Small internet interop utilities
### Components.hopper.get(url: str, params=None)
http/s get request
* **Parameters:**
* **url** (*str*)
* **params** Requests (library) parameters
### Components.hopper.post(url: str, params=None)
http/s post request
* **Parameters:**
* **url** (*str*)
* **params** Requests (library) parameters
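Both hopper functions package the HTTP response as a `{"response", "code"}` dict, compress it, and hand it to `Packets` for splitting. The packaging step can be sketched with the stdlib alone; note the real code serializes with msgpack rather than the json used here.

```python
import json
import lzma

# Sketch of hopper's response packaging: serialize the response record,
# then lzma-compress it (the real code then wraps the bytes in Packets).
def package_response(text: str, code: int) -> bytes:
    record = {"response": text, "code": code}
    return lzma.compress(json.dumps(record).encode())

def unpack_response(blob: bytes) -> dict:
    return json.loads(lzma.decompress(blob).decode())

blob = package_response("<html>ok</html>", 200)
print(unpack_response(blob)["code"])  # 200
```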

View File

@ -0,0 +1,5 @@
# Diffie-Hellman ephemeral Fernet-based encryption
### *class* Cryptography.DHEFern.DHEFern(cache, nodeNickname, cLog)
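The exchange DHEFern wraps can be illustrated with toy finite-field Diffie-Hellman. This is only a sketch of the math: the real class uses the `cryptography` library's `dh` primitives with proper parameter generation and HKDF key derivation, and the parameters below are deliberately small stand-ins.

```python
import hashlib

# Toy Diffie-Hellman parameters (2**127 - 1 is a Mersenne prime);
# real deployments use standardized multi-thousand-bit groups.
P = 2**127 - 1
G = 3

a_priv, b_priv = 1234, 5678  # stand-in private values

# Each side publishes G**priv mod P.
a_pub = pow(G, a_priv, P)
b_pub = pow(G, b_priv, P)

# Each side combines its private value with the other's public value,
# arriving at the same shared secret.
a_shared = pow(b_pub, a_priv, P)
b_shared = pow(a_pub, b_priv, P)
assert a_shared == b_shared

# Derive a 32-byte symmetric key from the shared secret
# (the real code uses HKDF; a plain hash stands in here).
key = hashlib.sha256(str(a_shared).encode()).digest()
print(len(key))  # 32
```

A Fernet key is then built from such derived bytes, giving both nodes a symmetric channel without ever transmitting the secret.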

17
docs/Filters/base.md Normal file
View File

@ -0,0 +1,17 @@
# Primary filtering functionality: dispatches to Protocols
### *class* Filters.base.Filter(cache, onodeID, todo, cLog)
#### *async* protoRoute(completeMessage)
Shorthand reference
### *class* Filters.Protocols.bubble.filter(completeMessage, recipient, recipientNode, onodeID, todo)
### *class* Filters.Protocols.catch.filter(completeMessage, recipient, recipientNode, todo)
### *class* Filters.Protocols.cryptography.filter(completeMessage, recipientNode, todo)
### *class* Filters.Protocols.map.filter(completeMessage, todo)

View File

@ -0,0 +1,3 @@
# Header packet: Metadata packet
### *class* Packets.HeaderPacket.Header(packetsID, packetCount, sender, senderDisplayName, recipient, recipientNode, json=True, fname=False, subpacket=False, wantFullResponse=False, mimeType=-1, protocol=None, packetsClass=0)

3
docs/Packets/Packet.md Normal file
View File

@ -0,0 +1,3 @@
# Base packet
### *class* Packets.Packet.Packet(data, packetsID=False, packetNumber=False, packetCount=1, packetsClass=-1)

5
docs/Packets/Packets.md Normal file
View File

@ -0,0 +1,5 @@
# Packets representation for full message
### *class* Packets.Packets.Packets(bytesObject, sender, senderDisplayName, recipient, recipientNode, dataSize=128, wantFullResponse=False, packetsClass=None)

View File

@ -0,0 +1,5 @@
# SinglePacket: Singular packet for very low data applications
### *class* Packets.SinglePacket.SinglePacket(data, packetsID, packetsClass=None, cache=None)

View File

@ -0,0 +1,3 @@
# SubPacket for handling individual packets of submessages

View File

@ -0,0 +1,3 @@
# SubPackets for handling full submessages

View File

@ -0,0 +1,3 @@
# Layer 0 data transmission
### *class* Transmission.transmission.Transmitter(device, filter, onodeID, cache, catch, cryptographyInfo, cLog)

70
docs/readme.md Normal file
View File

@ -0,0 +1,70 @@
![PierMesh logo](https://git.utopic.work/PierMesh/piermesh/raw/branch/main/piermeshicon.png)
<!-- PierMesh documentation master file, created by
sphinx-quickstart on Fri Jul 26 23:30:55 2024. -->
# PierMesh documentation
# Contents:
* [PierMesh service runner](/PierMesh/piermesh/src/branch/main/docs/run.md)
* [`Node`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node)
* [`Node.action_initNodeDH()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.action_initNodeDH)
* [`Node.action_keyDeriveDH()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.action_keyDeriveDH)
* [`Node.action_map()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.action_map)
* [`Node.action_sendCatch()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.action_sendCatch)
* [`Node.action_sendToPeer()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.action_sendToPeer)
* [`Node.cLog()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.cLog)
* [`Node.fListen()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.fListen)
* [`Node.monitor()`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.monitor)
* [`Node.toLog`](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node.toLog)
* [TUI application](/PierMesh/piermesh/src/branch/main/docs/ui.md)
* [`TUI`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI)
* [`TUI.visibleLogo`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.visibleLogo)
* [`TUI.nodeOb`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.nodeOb)
* [`TUI.done`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.done)
* [`TUI.CSS_PATH`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.CSS_PATH)
* [`TUI.action_quitFull()`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.action_quitFull)
* [`TUI.action_toggleFullscreen()`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.action_toggleFullscreen)
* [`TUI.compose()`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.compose)
* [`TUI.do_set_cpu_percent()`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.do_set_cpu_percent)
* [`TUI.do_set_mem()`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.do_set_mem)
* [`TUI.do_write_line()`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.do_write_line)
* [`TUI.on_mount()`](/PierMesh/piermesh/src/branch/main/docs/ui.md#ui.TUI.on_mount)
* [Configuration utilities](/PierMesh/piermesh/src/branch/main/docs/Bubble/config.md)
* [Network map representation](/PierMesh/piermesh/src/branch/main/docs/Bubble/map.md)
* [`Network`](/PierMesh/piermesh/src/branch/main/docs/Bubble/map.md#Bubble.map.Network)
* [Data routing logic and data](/PierMesh/piermesh/src/branch/main/docs/Bubble/router.md)
* [`Router`](/PierMesh/piermesh/src/branch/main/docs/Bubble/router.md#Bubble.router.Router)
* [Schemaless binary database](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md)
* [`Daisy`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Daisy)
* [`Daisy.get()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Daisy.get)
* [`Daisy.read()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Daisy.read)
* [`Daisy.sublist()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Daisy.sublist)
* [`Daisy.write()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Daisy.write)
* [`Cache`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Cache)
* [`Cache.create()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Cache.create)
* [`Cache.get()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Cache.get)
* [`Cache.refresh()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Cache.refresh)
* [`Cache.search()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Cache.search)
* [`Catch`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Catch)
* [`Catch.get()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Catch.get)
* [`Catch.sget()`](/PierMesh/piermesh/src/branch/main/docs/Components/daisy.md#Components.daisy.Catch.sget)
* [Small internet interop utilities](/PierMesh/piermesh/src/branch/main/docs/Components/hopper.md)
* [`get()`](/PierMesh/piermesh/src/branch/main/docs/Components/hopper.md#Components.hopper.get)
* [`post()`](/PierMesh/piermesh/src/branch/main/docs/Components/hopper.md#Components.hopper.post)
* [`DHEFern`](/PierMesh/piermesh/src/branch/main/docs/Cryptography/DHEFern.md)
* [`Filter`](/PierMesh/piermesh/src/branch/main/docs/Filters/base.md)
* [`Filter.protoRoute()`](/PierMesh/piermesh/src/branch/main/docs/Filters/base.md#Filters.base.Filter.protoRoute)
* [`filter`](/PierMesh/piermesh/src/branch/main/docs/Filters/base.md#Filters.Protocols.bubble.filter)
* [`filter`](/PierMesh/piermesh/src/branch/main/docs/Filters/base.md#Filters.Protocols.catch.filter)
* [`filter`](/PierMesh/piermesh/src/branch/main/docs/Filters/base.md#Filters.Protocols.cryptography.filter)
* [`filter`](/PierMesh/piermesh/src/branch/main/docs/Filters/base.md#Filters.Protocols.map.filter)
* [Header packet: Metadata packet](/PierMesh/piermesh/src/branch/main/docs/Packets/HeaderPacket.md)
* [`Header`](/PierMesh/piermesh/src/branch/main/docs/Packets/HeaderPacket.md#Packets.HeaderPacket.Header)
* [Base packet](/PierMesh/piermesh/src/branch/main/docs/Packets/Packet.md)
* [`Packet`](/PierMesh/piermesh/src/branch/main/docs/Packets/Packet.md#Packets.Packet.Packet)
* [`Packets`](/PierMesh/piermesh/src/branch/main/docs/Packets/Packets.md)
* [`SinglePacket`](/PierMesh/piermesh/src/branch/main/docs/Packets/SinglePacket.md)
* [Layer 0 data transmission](/PierMesh/piermesh/src/branch/main/docs/Transmission/transmission.md)
* [`Transmitter`](/PierMesh/piermesh/src/branch/main/docs/Transmission/transmission.md#Transmission.transmission.Transmitter)

88
docs/run.md Normal file
View File

@ -0,0 +1,88 @@
# PierMesh service runner
Main method for running the PierMesh service
### *class* run.Node
Class that handles most of the PierMesh data
[🔗 Source](https://git.utopic.work/PierMesh/piermesh/src/branch/main/src/run.py)
#### *async* action_initNodeDH(data: dict)
Initialize Diffie-Hellman key exchange
#### SEE ALSO
[`Cryptography.DHEFern.DHEFern`](/PierMesh/piermesh/src/branch/main/docs/Cryptography/DHEFern.md#Cryptography.DHEFern.DHEFern)
: End to end encryption functionality
#### *async* action_keyDeriveDH(data: dict)
Derive key via Diffie-Hellman key exchange
#### *async* action_map(data: dict)
Map new network data to internal network map
#### SEE ALSO
`Bubble.map.Network`
: Layered graph network representation
#### *async* action_sendCatch(data: dict)
Get catch and return the data to a peer
#### SEE ALSO
[`Bubble.router.Router`](/PierMesh/piermesh/src/branch/main/docs/Bubble/router.md#Bubble.router.Router)
: Routing class
#### *async* action_sendToPeer(data: dict)
Send data to a peer connected to the server
* **Parameters:**
**data** (*dict*) Data passed from the filter; this is a generic object, so it is similar across all actions here
#### SEE ALSO
`Filters.Protocols`
: Protocol based packet filtering
`webui.serve.Server`
: Runs a light Microdot web server with http/s and websocket functionality
`webui.serve.Server.sendToPeer`
: Function to actually execute the action
#### cLog(priority: int, message: str)
Convenience function that logs to the ui and log files
* **Parameters:**
* **priority** (*int*) Priority of message to be passed to logging
* **message** (*str*) Message to log
* **Return type:**
None
#### *async* fListen()
Loop to watch for tasks to do
#### SEE ALSO
`Filters.base.sieve`
: Packet filtering/parsing
### Notes
We use a common technique here: the handler is looked up by name in our dictionary of preloaded actions and called via its dictionary entry
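The dictionary-dispatch technique described above can be sketched as follows. The registry and handler names here are illustrative, and the real actions are `async` methods; they are shown synchronously for brevity.

```python
# Dictionary dispatch: map action names to preloaded handlers so the
# listen loop routes a task without an if/elif chain.
class MiniNode:
    def __init__(self):
        self.actions = {
            "map": self.action_map,
            "sendToPeer": self.action_sendToPeer,
        }

    def action_map(self, data):
        return ("map", data)

    def action_sendToPeer(self, data):
        return ("sendToPeer", data)

    def dispatch(self, task):
        # Look up the handler by name and call it with the task data.
        return self.actions[task["action"]](task["data"])

n = MiniNode()
print(n.dispatch({"action": "map", "data": {"onodeID": 1}}))
```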
#### *async* monitor()
Monitor and log ram and cpu usage
#### toLog
We store logs to be processed here
#### SEE ALSO
`logPassLoop`
: Loop to handle logging to file and TUI

72
docs/ui.md Normal file
View File

@ -0,0 +1,72 @@
# TUI application
### *class* ui.TUI(driver_class: Type[Driver] | None = None, css_path: str | PurePath | List[str | PurePath] | None = None, watch_css: bool = False)
TUI for PierMesh
[🔗 Source](https://git.utopic.work/PierMesh/piermesh/src/branch/main/src/ui.py)
#### visibleLogo
Whether the logo is visible or not, used in toggling visibility
* **Type:**
bool
#### nodeOb
Reference to the Node running the PierMesh service
* **Type:**
[Node](/PierMesh/piermesh/src/branch/main/docs/run.md#run.Node)
#### done
Whether the TUI has been killed
* **Type:**
bool
#### CSS_PATH *: ClassVar[CSSPathType | None]* *= 'ui.tcss'*
File paths to load CSS from.
#### action_quitFull()
Kill the whole stack by setting self to done and terminating the thread; we check this in run.monitor later and kill the rest of the stack with psutil
#### SEE ALSO
`run.monitor`
#### action_toggleFullscreen()
Toggle fullscreen logs by either collapsing width or setting it to its original size
#### compose()
Build the TUI
#### do_set_cpu_percent(percent: float)
Set CPU percent in the label and progress bar
* **Parameters:**
**percent** (*float*) Percent of the cpu PierMesh is using
#### do_set_mem(memmb: float)
Set memory usage label in the ui
* **Parameters:**
**memmb** (*float*) Memory usage of PierMesh in megabytes
#### do_write_line(logLine: str)
Write line to the logs panel
* **Parameters:**
**logLine** (*str*) Line to log
#### on_mount()
Called at setup; configures the title and the progress bar

BIN
imgs/bubble.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 41 KiB

BIN
imgs/catch.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 38 KiB

BIN
imgs/catchdisplay.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 275 B

BIN
imgs/daisydisplay.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 332 B

BIN
imgs/overview.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 89 KiB

BIN
imgs/piermeshicon.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.3 KiB

BIN
overview.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 89 KiB

BIN
piermeshicon.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.3 KiB

3
pyvenv.cfg Normal file
View File

@ -0,0 +1,3 @@
home = /usr/bin
include-system-site-packages = false
version = 3.10.12

40
readme.md Normal file
View File

@ -0,0 +1,40 @@
![PierMesh logo](piermeshicon.png)
# PierMesh
## A new internet, a fresh start
This is the monorepo for PierMesh.
Overview of PierMesh
![Overview diagram of PierMesh](overview.png)
Bubble overview
![Bubble overview diagram](bubble.png)
Catch overview
![Catch overview](catch.png)
# How to use
Note: these instructions will probably only work on Linux at the moment
Note: check the scripts to make sure they'll work with your system; in general I recommend checking scripts before you run them
Follow Meshtastic's guide on setting up your device: [https://meshtastic.org/docs/getting-started/](https://meshtastic.org/docs/getting-started/)
Make sure you have the latest Python installed
```
git clone https://git.utopic.work/PierMesh/piermesh
cd piermesh
python -m venv .
source bin/activate
pip install -r requirements.txt
cd src
chmod a+x ./scripts/falin
./scripts/falin
```

15
requirements.txt Executable file
View File

@ -0,0 +1,15 @@
pytap
meshtastic
beautifulsoup4
msgpack
esptool
jinja2
markdown2
watchdog
microdot
networkx[default]
psutil
sphinx
textual
textual-dev
sphinx-markdown-builder==0.6.6

0
src/Bubble/config.py Executable file
View File

3
src/Bubble/config.rst Normal file
View File

@ -0,0 +1,3 @@
Configuration utilities
==========================

71
src/Bubble/map.py Executable file
View File

@ -0,0 +1,71 @@
import networkx as nx
import msgpack
import json
import matplotlib.pyplot as plt
# TODO: Extra maps logic
class Network:
"""
Layered graph network
`🔗 Source <https://git.utopic.work/PierMesh/piermesh/src/branch/main/src/Bubble/map.py>`_
"""
def __init__(self, icg=None, file=None):
self.omap = nx.Graph()
self.imap = nx.Graph()
self.emaps = []
self.lookup = {}
def addLookup(self, onodeID, mnodeID):
self.lookup[onodeID] = mnodeID
def doLookup(self, onodeID):
if onodeID in self.lookup.keys():
return self.lookup[onodeID]
else:
return False
def export(self, path):
n = {}
n["omap"] = nx.to_dict_of_dicts(self.omap)
n["imap"] = nx.to_dict_of_dicts(self.imap)
n["emaps"] = []
for e in self.emaps:
n["emaps"].append(nx.to_dict_of_dicts(e))
# TODO: Daisy
with open(path, "wb") as f:
f.write(msgpack.dumps(n))
with open(path + ".json", "w") as f:
f.write(json.dumps(n))
def mimport(self, path):
ndata = ""
with open(path, "rb") as f:
ndata = msgpack.loads(f.read())
self.omap = nx.Graph(ndata["omap"])
self.imap = nx.Graph(ndata["imap"])
self.emaps = []
for e in ndata["emaps"]:
self.emaps.append(nx.Graph(e))
def addon(self, id):
self.omap.add_node(id)
def addoe(self, nodea, nodeb):
self.omap.add_edge(nodea, nodeb)
def addin(self, id):
self.imap.add_node(id)
def addie(self, nodea, nodeb):
self.imap.add_edge(nodea, nodeb)
def getRoute(self, senderNode, recipientNode):
return nx.shortest_path(self.omap, senderNode, recipientNode)
def render(self):
nx.draw(self.omap)
plt.savefig("tmp/omap.png")
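`getRoute` above delegates to `networkx.shortest_path`; on an unweighted graph that is plain breadth-first search. A stdlib sketch over an adjacency dict makes the routing step visible without networkx (the graph data here is illustrative):

```python
from collections import deque

# Breadth-first shortest path over an adjacency dict, the unweighted
# equivalent of networkx.shortest_path used by Network.getRoute.
def bfs_route(adj, src, dst):
    prev = {src: None}
    q = deque([src])
    while q:
        cur = q.popleft()
        if cur == dst:
            # Walk predecessors back to the source to build the path.
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for nxt in adj.get(cur, ()):
            if nxt not in prev:
                prev[nxt] = cur
                q.append(nxt)
    return None  # no route between the nodes

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(bfs_route(adj, 1, 4))  # [1, 2, 3, 4]
```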

5
src/Bubble/map.rst Normal file
View File

@ -0,0 +1,5 @@
Network map representation
==========================
.. autoclass:: Bubble.map.Network
:members:

41
src/Bubble/router.py Normal file
View File

@ -0,0 +1,41 @@
import msgpack
from Bubble.map import Network
from Components.daisy import Catch
from Components.daisy import Cache
import random, logging
# DONE Catch integration
# TODO: Catch examples
#
# 🐟+🔥+🤤
# prmh.catchexample.rabbit.hutch
# 🟥|🟧|🟦
# 🖐️📡🌍
# prmh@Ashwood_Skye
# 💻💻
class Router:
def __init__(self, cLog, nfpath="server.info"):
self.cLog = cLog
# TODO: Better network init
self.n = Network()
self.c = Catch(walk=True)
self.cache = Cache(walk=True)
self.cLog(10, "Loading server info")
self.serverInfo = self.cache.get(nfpath)
if self.serverInfo == False:
self.cache.create(nfpath, {"nodeID": random.randrange(0, 1000000)})
self.serverInfo = self.cache.get(nfpath)
self.n.addin(self.serverInfo.get()["nodeID"])
def getRoute(self, headerPacket):
headerPacket = msgpack.loads(headerPacket)
peer = headerPacket["recipient"]
node = headerPacket["node"]
def getCatch(self, head, tail, fins=None):
return self.c.get(head, tail, fins=fins)
def addc(self, peer, node, seperator, head, tail, data, fins=None):
self.c.addc(peer, node, seperator, head, tail, data, fins=fins)
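`Router.__init__` above bootstraps `server.info` with a get-or-create pattern: look the record up in the cache, and if it is missing, create it with a random node ID and re-fetch. The pattern can be sketched with a dict standing in for the Daisy cache:

```python
import random

# Dict-backed stand-in for the Daisy Cache API used by Router.
class MiniCache:
    def __init__(self):
        self.data = {}

    def get(self, path):
        # Like Cache.get: return the record, else False.
        return self.data.get(path, False)

    def create(self, path, record):
        self.data[path] = record
        return record

cache = MiniCache()
info = cache.get("server.info")
if info is False:
    # First run: mint a node ID and persist it, as Router does.
    cache.create("server.info", {"nodeID": random.randrange(0, 1000000)})
    info = cache.get("server.info")
print(0 <= info["nodeID"] < 1000000)  # True
```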

5
src/Bubble/router.rst Normal file
View File

@ -0,0 +1,5 @@
Data routing logic and data
===========================
.. autoclass:: Bubble.router.Router
:members:

27
src/Bubble/tests/main.py Executable file
View File

@ -0,0 +1,27 @@
from Bubble.map import Network
import random
# TODO: Creating network from 0
n = Network()
onodes = [random.randrange(1, 1000000) for it in range(20)]
for onode in onodes:
n.addon(onode)
n.addin(onode)
for it in range(10):
nodea = random.choice(onodes)
print(nodea)
nodeb = random.choice([n for n in onodes if n != nodea])
print(nodeb)
n.addoe(nodea, nodeb)
for it in range(10):
nodea = random.choice(onodes)
nodeb = random.choice([n for n in onodes if n != nodea])
#print(n.getRoute(nodea, nodeb))
n.export("tmp/map.bin")
n.render()

7
src/Clients/base.py Executable file
View File

@ -0,0 +1,7 @@
import random
class Client:
def __init__(self, nodeID, permissions=False):
self.cid = random.randrange(1, 1000000)
self.nodeID = nodeID

View File

423
src/Components/daisy.py Executable file
View File

@ -0,0 +1,423 @@
import os
import json
import msgpack
import Cryptography
import random
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import logging
# TODO: delete
# TODO: propagate json changes to msgpack automatically
# TODO: propagate msgpack changes to cache automatically
# TODO: Indexing
def _json_to_msg(path: str):
"""
Convert json at the path plus .json to a msgpack binary
Parameters
----------
path: str
Path to json minus the extension
"""
rpath = path + ".json"
res = ""
with open(rpath) as f:
res = msgpack.dumps(json.load(f))
with open(path, "wb") as f:
f.write(res)
class Daisy:
"""
Base class for Daisy data representation
`🔗 Source <https://git.utopic.work/PierMesh/piermesh/src/branch/main/Components/daisy.py>`_
"""
def __init__(
self,
filepath: str,
templates: dict = {},
template: bool = False,
prefillDict: bool = False,
):
"""
Parameters
----------
filepath: str
Path to disk location
templates: dict
Dictionary of templates to use
template: bool
Which template to use, or False for none
prefillDict: bool
Dictionary of values to prefill the record with, or False
"""
self.filepath = filepath
if os.path.exists(filepath) != True:
with open(filepath, "wb") as f:
if template != False:
if template in templates.keys():
t = templates[template].get()
if prefillDict != False:
for k in prefillDict.keys():
t[k] = prefillDict[k]
f.write(msgpack.dumps(t))
self.msg = t
else:
print("No such template as: " + template)
else:
f.write(msgpack.dumps({}))
self.msg = {}
elif os.path.isdir(filepath):
self.msg = "directory"
else:
with open(filepath, "rb") as f:
self.msg = msgpack.loads(f.read())
# Use override for updating
def write(
self,
override=False,
encrypt: bool = False,
encryptKey=None,
recur: bool = False,
):
"""
Write record to disk
Parameters
----------
override
Either false or a dictionary of values to set on the record
encrypt: bool
Whether to encrypt the record (TODO)
encryptKey
Key to encrypt record with, or None if not set
recur: bool
Whether to recursively handle keys
"""
if override != False:
for key in override.keys():
# TODO: Deeper recursion
if recur:
if not key in self.msg.keys():
self.msg[key] = {}
for ikey in override[key].keys():
self.msg[key][ikey] = override[key][ikey]
else:
self.msg[key] = override[key]
data = msgpack.dumps(self.msg)
with open(self.filepath, "wb") as f:
f.write(data)
# Use for refreshing
def read(self, decrypt: bool = False, decryptKey=False):
"""
Read record from disk to memory
Parameters
----------
decrypt: bool
Whether to decrypt record
decryptKey
Key to decrypt record
"""
if os.path.isdir(self.filepath):
self.msg = "directory"
else:
with open(self.filepath, "rb") as f:
self.msg = msgpack.loads(f.read())
def get(self):
"""
Get record dictionary from memory
Returns
-------
self.msg: dict
"""
return self.msg
def sublist(self):
"""
Lists the contents of the directory if the object is a directory, otherwise returns None
"""
fpath = self.filepath
if os.path.isdir(fpath):
return ["messages/" + x for x in os.listdir(fpath)]
else:
return None
def loadTemplates(templatePath: str = "templates"):
"""Load templates for prefilling records
Parameters
----------
templatePath: str
Path to templates
"""
templates = {}
for p in os.listdir(templatePath):
p = templatePath + "/" + p
if os.path.isdir(p):
for ip in os.listdir(p):
ip = p + "/" + ip
if os.path.isdir(ip):
print("Too deep, skipping: " + ip)
else:
templates[ip] = Daisy(ip)
else:
templates[p] = Daisy(p)
return templates
class CFSHandler(FileSystemEventHandler):
"""
File system watchdog that propagates disk changes to records to their proper cache
"""
def __init__(self, cache, isCatch: bool = False):
"""
Parameters
----------
cache: Cache
Daisy cache to update
isCatch: bool
Is the cache for catchs
"""
self.cache = cache
self.isCatch = isCatch
super().__init__()
def on_any_event(self, event):
"""
Called when a CRUD operation is performed on a record file
Parameters
----------
event
Event object provided by watchdog
"""
if not (".json" in event.src_path):
if not (".md" in event.src_path):
tpath = "/".join(event.src_path.split("/")[1:])
if tpath != "":
if self.isCatch:
self.cache.sget(tpath)
else:
self.cache.get(tpath).get()
# TODO: Dumping to cacheFile
class Cache:
"""
In memory collection of Daisy records
"""
def __init__(
self,
filepaths=None,
cacheFile=None,
path: str = "daisy",
walk: bool = False,
isCatch: bool = False,
):
"""
Parameters
----------
filepaths
Either a list of filepaths to load or None
cacheFile
Path to a cache file which is a collection of paths to load
path: str
Path prefix to load records from
walk: bool
Whether to automatically walk the path and load records
isCatch: bool
Whether this cache is for catchs
"""
self.data = {}
self.path = path
self.event_handler = CFSHandler(self, isCatch=isCatch)
self.observer = Observer()
self.observer.schedule(self.event_handler, self.path, recursive=True)
self.observer.start()
# TODO: Test
if filepaths != None:
for fp in filepaths:
fp = path + "/" + fp
if os.path.isfile(fp):
self.data[fp] = Daisy(fp)
elif cacheFile != None:
with open(cacheFile, "r") as f:
for fp in f.read().split("\n"):
self.data[fp] = Daisy(fp)
elif walk:
for root, dirs, files in os.walk(self.path):
for p in dirs + files:
# print("walking")
if not (".json" in p):
if not (".md" in p):
tpath = root + "/" + p
# print(p)
# print(tpath)
self.data[tpath] = Daisy(tpath)
def create(self, path: str, data: dict):
"""
Create new record
Parameters
----------
path: str
Path to create record at
data: dict
Data to populate record with
"""
with open(self.path + "/" + path, "wb") as f:
f.write(msgpack.dumps(data))
logging.log(10, "Done creating record")
self.data[path] = Daisy(self.path + "/" + path)
logging.log(10, "Done loading to Daisy")
return self.data[path]
def get(self, path: str):
"""
Get record at path, else return False
path: str
Path of record
"""
if path in self.data.keys():
return self.data[path]
else:
if os.path.exists(self.path + "/" + path):
self.data[path] = Daisy(self.path + "/" + path)
return self.data[path]
else:
logging.log(10, "File does not exist")
return False
def refresh(self):
"""
Reload from disk to memory
"""
for key in self.data.keys():
self.data[key].read()
def search(self, keydict: dict, strict: bool = True):
"""
Search the cache for records with matching values
keydict: dict
Values to search for
strict: bool
Whether to require all values to match
"""
results = []
for key, val in self.data.items():
val = val.get()
if strict and type(val) != str:
addcheck = False
for k, v in keydict.items():
if k in val.keys():
if v in val[k]:
addcheck = True
else:
addcheck = False
break
if addcheck:
results.append([key, val])
elif type(val) != str:
for k, v in keydict.items():
if k in val.keys():
if v in val[k]:
results.append([key, val])
return results
class Catch(Cache):
"""
Subclass of Cache for handling catches
.. image:: https://git.utopic.work/PierMesh/piermesh/raw/branch/main/imgs/catchdisplay.png
"""
catches = {}
def __init__(
self, path: str = "catch", filepaths=None, catchFile=None, walk: bool = False
):
"""
Basically the same initialization parameters as Cache
"""
super().__init__(
filepaths=filepaths, cacheFile=catchFile, path=path, walk=walk, isCatch=True
)
# TODO: Fins
def sget(self, path: str):
"""
Call Cache's get to get record
"""
return super().get(path)
def get(self, head: str, tail: str, fins=None):
"""
Get catch by pieces
Parameters
----------
head: str
First part of catch (maximum: 4 characters)
tail: str
Second part of catch (maximum: 16 characters)
fins
List of (maximum 8 characters) strings at the end of the catch, or None if none
"""
r = self.search({"head": head, "tail": tail})
return r[0][1]["html"]
def addc(self, peer, node, seperator, head, tail, data, fins=None):
tnpath = "catch/" + node
if os.path.exists(tnpath) != True:
os.makedirs(tnpath)
tppath = tnpath + "/" + peer
if os.path.exists(tppath) != True:
os.makedirs(tppath)
sid = str(random.randrange(0, 999999)).zfill(6)
data["seperator"] = seperator
data["head"] = head
data["tail"] = tail
if fins != None:
data["fins"] = fins
res = self.create("{0}/{1}/{2}".format(node, peer, sid), data)
return [sid, res]

9
src/Components/daisy.rst Normal file
View File

@ -0,0 +1,9 @@
Schemaless binary database
==========================
.. autoclass:: Components.daisy.Daisy
:members:
.. autoclass:: Components.daisy.Cache
:members:
.. autoclass:: Components.daisy.Catch
:members:

37
src/Components/hopper.py Executable file
View File

@ -0,0 +1,37 @@
import requests
import msgpack
import lzma
from Packets.Packets import Packets
def get(url: str, params=None):
"""
http/s get request
Parameters
----------
url: str
params
Requests (library) parameters
"""
r = requests.get(url, params=params)
r = {"response": r.text, "code": r.status_code}
return Packets(lzma.compress(msgpack.dumps(r))).get()
def post(url: str, params=None):
"""
http/s post request
Parameters
----------
url: str
params
Requests (library) parameters
"""
r = requests.post(url, data=params)
r = {"response": r.text, "code": r.status_code}
return Packets(lzma.compress(msgpack.dumps(r))).get()

View File

@ -0,0 +1,5 @@
Small internet interop utilities
================================
.. automodule:: Components.hopper
:members:

205
src/Cryptography/DHEFern.py Executable file
View File

@ -0,0 +1,205 @@
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import (
Encoding,
NoEncryption,
ParameterFormat,
PublicFormat,
PrivateFormat,
)
import cryptography.hazmat.primitives.serialization as Serialization
import msgpack
# TODO: Different store directories per node
class DHEFern:
def __init__(self, cache, nodeNickname, cLog):
self.cLog = cLog
self.loadedParams = {}
self.loadedKeys = {}
self.nodeNickname = nodeNickname
self.cache = cache
if self.cache.get("cryptography/{0}/paramStore".format(nodeNickname)) == False:
self.initStore("param")
else:
self.params = self.loadParamBytes(
self.cache.get(
"cryptography/{0}/paramStore".format(nodeNickname)
).get()["self"]
)
if self.cache.get("cryptography/{0}/keyStore".format(nodeNickname)) == False:
self.initStore("key")
self.genKeyPair()
else:
tks = self.cache.get("cryptography/{0}/keyStore".format(nodeNickname)).get()
self.publicKey = tks["self"]["publicKey"]
self.privateKey = tks["self"]["privateKey"]
def checkInMem(self, store, nodeID):
if store == "param":
return nodeID in self.loadedParams.keys()
elif store == "key":
return nodeID in self.loadedKeys.keys()
def loadRecordToMem(self, store, nodeID):
r = self.getRecord(store, nodeID)
if r == False:
self.cLog(
30, "Tried to load nonexistent {0} for node {1}".format(store, nodeID)
)
return False
elif self.checkInMem(store, nodeID):
self.cLog(10, "{0}s already deserialized, skipping".format(store))
else:
if store == "param":
self.loadedParams[nodeID] = self.loadParamBytes(r)
elif store == "key":
self.loadedKeys[nodeID] = {
"publicKey": Serialization.load_pem_public_key(r["publicKey"]),
"privateKey": Serialization.load_pem_private_key(
r["privateKey"], None
),
}
return True
# TODO: Store class daisy
#
def getRecord(self, store, key):
r = self.cache.get(
"cryptography/{0}/{1}Store".format(self.nodeNickname, store)
).get()
if r == False:
self.cLog(20, "Record does not exist")
return False
else:
if key in r.keys():
return r[key]
else:
self.cLog(20, "Record does not exist")
return False
def initStore(self, store):
if not os.path.exists("daisy/cryptography/" + self.nodeNickname):
os.mkdir("daisy/cryptography/" + self.nodeNickname)
if store == "param":
self.genParams()
self.cache.create(
"cryptography/{0}/paramStore".format(self.nodeNickname),
{"self": self.getParamsBytes()},
)
elif store == "key":
self.cache.create(
"cryptography/{0}/keyStore".format(self.nodeNickname), {"self": {}}
)
else:
self.cLog(30, "Store not defined")
def updateStore(self, store, entry, data, recur=True):
self.cache.get(
"cryptography/" + self.nodeNickname + "/" + store + "Store"
).write(override={entry: data}, recur=recur)
def genParams(self):
params = dh.generate_parameters(generator=2, key_size=2048)
self.params = params
return params
def genKeyPair(self, paramsOverride=False, setSelf=True):
privateKey = self.params.generate_private_key()
if setSelf:
self.privateKey = privateKey
publicKey = privateKey.public_key()
if setSelf:
self.publicKey = publicKey
self.updateStore(
"key",
"self",
{
"publicKey": self.publicKey.public_bytes(
Encoding.PEM, PublicFormat.SubjectPublicKeyInfo
),
"privateKey": self.privateKey.private_bytes(
Encoding.PEM, PrivateFormat.PKCS8, NoEncryption()
),
},
)
return [privateKey, publicKey]
else:
publicKey = publicKey.public_bytes(
Encoding.PEM, PublicFormat.SubjectPublicKeyInfo
)
privateKey = privateKey.private_bytes(
Encoding.PEM, PrivateFormat.PKCS8, NoEncryption()
)
return [privateKey, publicKey]
def keyDerive(self, pubKey, salt, nodeID, params):
if self.checkInMem("param", nodeID) == False:
if self.getRecord("param", nodeID) == False:
self.updateStore("param", nodeID, params, recur=False)
self.loadRecordToMem("param", nodeID)
self.cLog(20, "Precheck done for key derivation")
# TODO: Load them and if private key exists load it, otherwise generate a private key
if self.checkInMem("key", nodeID) == False:
if self.getRecord("key", nodeID) == False:
privateKey, publicKey = self.genKeyPair(setSelf=False)
self.updateStore(
"key", nodeID, {"publicKey": publicKey, "privateKey": privateKey}
)
self.loadRecordToMem("key", nodeID)
sharedKey = self.loadedKeys[nodeID]["privateKey"].exchange(
Serialization.load_pem_public_key(pubKey)
)
# Perform key derivation.
self.cLog(20, "Performing key derivation")
derivedKey = HKDF(
algorithm=hashes.SHA256(), length=32, salt=salt, info=b"handshake data"
).derive(sharedKey)
self.cLog(20, "Derived key")
ederivedKey = base64.urlsafe_b64encode(derivedKey)
tr = self.getRecord("key", nodeID)
tr["derivedKey"] = ederivedKey
self.updateStore("key", nodeID, tr)
self.cLog(20, "Done with cryptography store updates")
return ederivedKey
def getSalt(self):
return os.urandom(16)
def encrypt(self, data, nodeID, isDict=True):
r = self.getRecord("key", nodeID)
if r == False:
self.cLog(20, "Node {0} not in keystore".format(nodeID))
return False
else:
derivedKey = r["derivedKey"]
fernet = Fernet(derivedKey)
if isDict:
data = msgpack.dumps(data)
token = fernet.encrypt(data)
return token
def decrypt(self, data, nodeID):
r = self.getRecord("key", nodeID)
if r == False:
self.cLog(20, "No record of node " + nodeID)
return False
elif "derivedKey" not in r:
self.cLog(20, "No key derived for node " + nodeID)
return False
else:
fernet = Fernet(self.getRecord("key", nodeID)["derivedKey"])
return msgpack.loads(fernet.decrypt(data))
def getParamsBytes(self):
return self.params.parameter_bytes(Encoding.PEM, ParameterFormat.PKCS3)
def loadParamBytes(self, pemBytes):
self.params = Serialization.load_pem_parameters(pemBytes)
return self.params

View File

@ -0,0 +1,7 @@
Diffie-Hellman ephemeral
Fernet based encryption
==========================
.. autoclass:: Cryptography.DHEFern.DHEFern
:members:

View File

@ -0,0 +1,15 @@
import DHEFern as d
from cryptography.fernet import Fernet
# Small enough to send in one packet
parameters = d.getParams()
myKey = d.getPrivateKey(parameters)
yourKey = d.getPublicKey(parameters)
salt = d.getSalt()
derived_key = d.keyDerive(myKey, yourKey, salt)
# Put this somewhere safe!
#key = Fernet.generate_key()
# TODO: how to use derived key
#print(derived_key)
f = Fernet(derived_key)
token = f.encrypt(b"A really secret message. Not for prying eyes.")
print(f.decrypt(token).decode("utf-8"))

View File

View File

@ -0,0 +1,14 @@
async def filter(completeMessage, recipient, recipientNode, onodeID, todo):
m = completeMessage
if recipientNode == onodeID:
todo.append(
{
"action": "sendToPeer",
"data": {"res": m["data"]["data"], "recipient": recipient},
}
)
else:
# TODO: Forwarding message to next node
# TODO: Method to get next node in path to recipient node
# self.t.addPackets(m.data, sender, senderDisplay, recipient, recipientNode)
pass

View File

@ -0,0 +1,16 @@
async def filter(completeMessage, recipient, recipientNode, todo):
m = completeMessage
# TODO: Sending to other nodes clients
todo.append(
{
"action": "sendCatch",
"data": {
"toLocal": True,
"recipientNode": recipientNode,
"recipient": recipient,
"head": m["head"],
"body": m["body"],
"fins": m["fins"],
},
}
)

View File

@ -0,0 +1,15 @@
import logging
async def filter(completeMessage, recipientNode, todo):
todo.append(
{
"action": "keyDeriveDH",
"data": {
"publicKey": completeMessage["data"]["publicKey"],
"params": completeMessage["data"]["params"],
"recipientNode": recipientNode,
},
}
)
logging.log(10, "Adding cryptography request")

View File

@ -0,0 +1,20 @@
import logging
async def filter(completeMessage, todo):
m = completeMessage
todo.append(
{
"action": "map",
"data": {
"onodeID": m["data"]["onodeID"],
"mnodeID": m["data"]["mnodeID"],
},
}
)
todo.append(
{
"action": "initNodeDH",
"data": {"mnodeID": m["data"]["mnodeID"], "onodeID": m["data"]["onodeID"]},
}
)

121
src/Filters/base.py Normal file
View File

@ -0,0 +1,121 @@
from Components.daisy import Cache
from Packets.Packets import Packets
import msgpack, lzma
from Packets.Packet import Packet
import logging
import Filters.Protocols.bubble
import Filters.Protocols.map
import Filters.Protocols.catch
import Filters.Protocols.cryptography
import asyncio
import traceback
# ✅ TODO: Cache integration for messages
class Filter:
def __init__(self, cache, onodeID, todo, cLog):
self.cLog = cLog
self.cache = cache
# Note: Messages is temporary storage
# for unfinished messages
self.messages = {}
self.completed = []
self.todo = todo
self.onodeID = onodeID
def mCheck(self, payload):
try:
msgpack.loads(payload)
return True
except Exception as e:
self.cLog(20, "Not msgpack encoded, skipping")
return False
def selfCheck(self, packet):
if packet["fromId"] == packet["toId"]:
self.cLog(20, "Self packet, ignored")
return False
else:
return True
async def protoMap(self, protocolID):
protocolID = str(protocolID).zfill(6)
return self.cache.get("mlookup").get()[protocolID]
async def protoRoute(self, completeMessage):
"""
Shorthand reference
"""
m = completeMessage
sender = m["sender"]
senderDisplay = m["senderDisplayName"]
recipient = m["recipient"]
recipientNode = m["recipientNode"]
# TODO: Fix packets to use class
protocol = await self.protoMap(m["packetsClass"])
self.cLog(20, "Protocol: " + protocol)
if protocol == "bubble":
await Filters.Protocols.bubble.filter(
m, recipient, recipientNode, self.onodeID, self.todo
)
elif protocol == "map":
await Filters.Protocols.map.filter(m, self.todo)
elif protocol == "catch":
await Filters.Protocols.catch.filter(m, recipient, recipientNode, self.todo)
elif protocol == "cryptography":
await Filters.Protocols.cryptography.filter(
completeMessage, recipientNode, self.todo
)
else:
self.cLog(30, "Can't route, no protocol")
async def sieve(self, packet):
p = packet["decoded"]["payload"]
if self.selfCheck(packet) and self.mCheck(p):
try:
p = msgpack.loads(p)
self.cLog(20, p)
packetsID = p["packetsID"]
packetsClass = p["packetsClass"]
if packetsID == 0:
self.cLog(20, "Single packet")
# Do sp handling
pass
if packetsID in self.completed:
raise ValueError("Message already completed")
if not (packetsID in self.messages):
self.messages[packetsID] = {
"packetCount": p["packetCount"],
"data": [],
"finished": False,
"dataOrder": [],
}
if "wantFullResponse" in p.keys():
for k in p.keys():
if k != "data":
self.messages[packetsID][k] = p[k]
elif not p["packetNumber"] in self.messages[packetsID]["dataOrder"]:
self.messages[packetsID]["data"].append(p["data"])
self.messages[packetsID]["dataOrder"].append(p["packetNumber"])
if (len(self.messages[packetsID]["data"])) >= (
self.messages[packetsID]["packetCount"] - 1
) and ("wantFullResponse" in self.messages[packetsID].keys()):
self.cLog(20, "Finished receiving for message " + str(packetsID))
self.messages[packetsID]["finished"] = True
if self.messages[packetsID]["wantFullResponse"] != False:
# TODO: implement loop
# responseLoop(packets_id)
pass
# TODO: Sorting
completeMessage = self.messages[packetsID]
completeMessage["data"] = Packets.reassemble(None, completeMessage)
del self.messages[packetsID]
self.completed.append(packetsID)
self.cLog(20, "Assembly completed, routing")
# self.cache.create("messages/" + str(packetsID), cm)
await self.protoRoute(completeMessage)
except Exception as e:
self.cLog(30, traceback.print_exc())
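`sieve` accumulates packets per `packetsID`, appending each payload to `data` and its `packetNumber` to `dataOrder`, so out-of-order or duplicate arrivals are handled before reassembly. A condensed sketch of that bookkeeping (field names follow the class above; the explicit sort shown here is what `Packets.reassemble` achieves via `dataOrder.index`):

```python
def collect(messages, packet):
    # Accumulate one data packet; return the reassembled payload
    # once every expected packet has arrived, else None.
    pid = packet["packetsID"]
    entry = messages.setdefault(
        pid, {"packetCount": packet["packetCount"], "data": [], "dataOrder": []}
    )
    if packet["packetNumber"] not in entry["dataOrder"]:  # drop duplicates
        entry["data"].append(packet["data"])
        entry["dataOrder"].append(packet["packetNumber"])
    if len(entry["data"]) == entry["packetCount"]:
        # Reorder by packetNumber, then join the chunks.
        ordered = [d for _, d in sorted(zip(entry["dataOrder"], entry["data"]))]
        del messages[pid]
        return b"".join(ordered)
    return None

msgs = {}
parts = [
    {"packetsID": 7, "packetCount": 3, "packetNumber": 2, "data": b"lo "},
    {"packetsID": 7, "packetCount": 3, "packetNumber": 1, "data": b"hel"},
    {"packetsID": 7, "packetCount": 3, "packetNumber": 3, "data": b"mesh"},
]
results = [collect(msgs, p) for p in parts]
assert results == [None, None, b"hello mesh"]
```

The real `sieve` additionally tracks the header packet's metadata fields and a `completed` list so a finished `packetsID` is never reprocessed.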

18
src/Filters/base.rst Normal file
View File

@ -0,0 +1,18 @@
Primary filtering functionality
Dispatches to Protocols
===============================
.. autoclass:: Filters.base.Filter
:members:
.. autoclass:: Filters.Protocols.bubble.filter
:members:
.. autoclass:: Filters.Protocols.catch.filter
:members:
.. autoclass:: Filters.Protocols.cryptography.filter
:members:
.. autoclass:: Filters.Protocols.map.filter
:members:

20
src/Makefile Normal file
View File

@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

53
src/Packets/HeaderPacket.py Executable file
View File

@ -0,0 +1,53 @@
from Packets.Packet import Packet
import Components.daisy as d
import msgpack
class Header(Packet):
def __init__(
self,
packetsID,
packetCount,
sender,
senderDisplayName,
recipient,
recipientNode,
json=True,
fname=False,
subpacket=False,
wantFullResponse=False,
mimeType=-1,
protocol=None,
packetsClass=0,
):
super().__init__(
"", packetsID=packetsID, packetCount=packetCount, packetsClass=packetsClass
)
self.sender = sender
self.senderDisplayName = senderDisplayName
self.recipient = recipient
self.recipientNode = recipientNode
self.json = json
self.fname = fname
self.subpacket = subpacket
self.wantFullResponse = wantFullResponse
self.mimeType = mimeType
def usePreset(self, path):
preset = d.Daisy(path)
for key in preset.get().keys():
self.msg[key] = preset.get()[key]
def dump(self):
res = msgpack.loads(super().dump())
res["sender"] = self.sender
res["senderDisplayName"] = self.senderDisplayName
res["recipient"] = self.recipient
res["recipientNode"] = self.recipientNode
# res["json"] = self.json
# res["fname"] = self.fname
res["subpacket"] = self.subpacket
res["wantFullResponse"] = self.wantFullResponse
res["mimeType"] = self.mimeType
# res["protocol"] = self.protocol
return msgpack.dumps(res)

View File

@ -0,0 +1,5 @@
Header packet: Metadata packet
===============================
.. autoclass:: Packets.HeaderPacket.Header
:members:

54
src/Packets/Packet.py Executable file
View File

@ -0,0 +1,54 @@
import lzma, sys
from Components.daisy import Daisy
import msgpack
import logging
class Packet:
def parsePayload(data):
msg = msgpack.loads(data)
return [
msg["packetsID"],
msg["packetNumber"],
lzma.decompress(msg["data"]),
msg["packetsClass"],
]
def __init__(
self,
data,
packetsID=False,
packetNumber=False,
packetCount=1,
packetsClass=-1,
):
if packetsID == False:
self.packetsID, self.packetNumber, self.data, self.packetsClass = (
self.parsePayload(data)
)
else:
self.packetsID = packetsID
self.packetNumber = packetNumber
self.data = data
self.packetsClass = packetsClass
if packetsClass != -1:
pass
"""template = Daisy("daisy/packet_templates/template.lookup").get()[
str(packetsClass).zfill(2)
]
edata = Daisy("daisy/packet_templates/" + template)
for key in edata.get().keys():
self.data[key] = edata.get()[key]"""
def dump(self):
res = {
"packetsID": self.packetsID,
"packetNumber": self.packetNumber,
"data": self.data,
"packetsClass": self.packetsClass,
}
if res["data"] == "":
res.pop("data")
ores = msgpack.dumps(res)
logging.log(20, "Packet size: " + str(sys.getsizeof(ores)))
return ores

5
src/Packets/Packet.rst Normal file
View File

@ -0,0 +1,5 @@
Base packet
===============================
.. autoclass:: Packets.Packet.Packet
:members:

91
src/Packets/Packets.py Executable file
View File

@ -0,0 +1,91 @@
import Packets.Packet as p
import Packets.HeaderPacket as h
import lzma
import msgpack
import random
import sys
import math
# Reassemble method
# Test
# Polymorph to accept payload array, done
# Create packet instance with payload
# Add to and from node ids
# Node id generation, random, checked against existing ids
# DO NOT CHANGE DATA SIZE UNLESS YOU KNOW WHAT YOURE DOING
class Packets:
def __init__(
self,
bytesObject,
sender,
senderDisplayName,
recipient,
recipientNode,
dataSize=128,
wantFullResponse=False,
packetsClass=None,
):
if isinstance(bytesObject, list):
# TODO: instantiating HeaderPacket correctly
packets = [h.Header(bytesObject[0])]
for packet in bytesObject:
packets.append(
p.Packet(
packet["data"],
packetsID=packet["packetsID"],
packetNumber=packet["packetNumber"],
packetsClass=packetsClass,
)
)
self.packets = packets
else:
bytesObject = lzma.compress(bytesObject)
packets = []
self.packetsID = random.randrange(0, 999999)
pnum = 1
blen = math.ceil(len(bytesObject) / dataSize)
tb = b""
for it in range(blen):
if it >= (blen - 1):
b = bytesObject[it * dataSize :]
else:
b = bytesObject[it * dataSize : (it * dataSize + dataSize)]
packets.append(
p.Packet(b, self.packetsID, pnum, packetsClass=packetsClass)
)
pnum += 1
tb += b
packets.insert(
0,
h.Header(
self.packetsID,
pnum,
sender,
senderDisplayName,
recipient,
recipientNode,
wantFullResponse=wantFullResponse,
packetsClass=packetsClass,
),
)
for it in range(pnum):
packet = msgpack.loads(packets[it].dump())
packet["packetCount"] = pnum
packets[it] = msgpack.dumps(packet)
self.packets = packets
def get(self):
return self.packets
def reassemble(self, cm):
data = b""
for it in range(1, int(cm["packetCount"])):
data += cm["data"][cm["dataOrder"].index(it)]
res = msgpack.loads(lzma.decompress(data))
return res
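`Packets.__init__` lzma-compresses the payload and slices it into `dataSize`-byte chunks, with a `Header` (packet 0) carrying the final count. The slicing arithmetic in isolation, as a sketch rather than the full class:

```python
import lzma
import math

def chunk(payload, data_size=128):
    # Compress first, then slice into fixed-size chunks the way
    # Packets.__init__ does; the header packet would carry len(chunks).
    blob = lzma.compress(payload)
    count = math.ceil(len(blob) / data_size)
    return [blob[i * data_size:(i + 1) * data_size] for i in range(count)]

chunks = chunk(b"x" * 1000, data_size=64)
assert all(len(c) <= 64 for c in chunks)
# Concatenating the chunks in order and decompressing recovers the payload,
# which is exactly what reassemble does on the receiving side.
assert lzma.decompress(b"".join(chunks)) == b"x" * 1000
```

Note the class warning above about `dataSize`: it must stay within the radio's payload limit, which is why the default is conservative.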

6
src/Packets/Packets.rst Normal file
View File

@ -0,0 +1,6 @@
Packets representation for full
message
===============================
.. autoclass:: Packets.Packets.Packets
:members:

16
src/Packets/SinglePacket.py Executable file
View File

@ -0,0 +1,16 @@
from .Packet import Packet
import msgpack, lzma
# TODO: Instantiation
# TODO: Packet template loading
class SinglePacket(Packet):
def __init__(self, data, packetsID, packetsClass=None, cache=None):
super().__init__(
lzma.compress(msgpack.dumps(data)),
packetsID=packetsID,
packetNumber=0,
packetsClass=packetsClass,
)
self.packetCount = 1

View File

@ -0,0 +1,6 @@
SinglePacket: Singular packet
for very low data applications
===============================
.. autoclass:: Packets.SinglePacket.SinglePacket
:members:

0
src/Packets/SubPacket.py Normal file
View File

View File

@ -0,0 +1,3 @@
SubPacket for handling
individual packets of submessages
=================================

View File

View File

@ -0,0 +1,3 @@
SubPackets for handling
full submessages
=======================

View File

@ -0,0 +1,223 @@
from sys import getsizeof
import meshtastic
import meshtastic.serial_interface
from pubsub import pub
from Packets.Packets import Packets as Packets
from Packets.SinglePacket import SinglePacket
import time
from threading import Thread
from Components.daisy import Catch, Cache
import sys
import logging
# from Filters.base import Filter
import msgpack
import asyncio
import random
class Transmitter:
def __init__(self, device, filter, onodeID, cache, catch, cryptographyInfo, cLog):
self.cLog = cLog
self.cryptographyInfo = cryptographyInfo
self.filter = filter
self.tcache = cache
self.tcatch = catch
self.html = False
self.notConnected = True
self.messages = {}
self.acks = {}
# self.threads = {}
self.onodeID = onodeID
# Be careful with this
self.cpid = 0
self.tasks = {}
# TODO: use node id to deliver directly
pub.subscribe(self.onReceive, "meshtastic.receive")
pub.subscribe(self.onConnection, "meshtastic.connection.established")
self.interface = meshtastic.serial_interface.SerialInterface(device)
i = 0
while self.notConnected:
if i % 5000000 == 0:
self.cLog(20, "Waiting for node initialization...")
i += 1
self.cLog(20, "Initialized")
# TODO: Sending packets across multiple nodes/load balancing/distributed packet transmission/reception
def onReceive(self, packet, interface):
asyncio.run(self.filter.sieve(packet))
self.tcache.refresh()
async def sendAnnounce(self):
await self.addPackets(
msgpack.dumps(
{
"onodeID": self.onodeID,
"mnodeID": self.interface.localNode.nodeNum,
}
),
self.onodeID,
None,
True,
None,
packetsClass=0,
)
def onConnection(self, interface, topic=pub.AUTO_TOPIC):
# self.send("connect".encode("utf-8"))
# time.sleep(3)
asyncio.run(self.sendAnnounce())
self.notConnected = False
def responseCheck(self, packet):
rid = packet["decoded"]["requestId"]
if packet["decoded"]["routing"]["errorReason"] == "MAX_RETRANSMIT":
self.cLog(20, "Got ack error")
self.acks[str(rid)] = False
else:
self.acks[str(rid)] = True
# TODO: Threaded send method
def send(self, packet, recipientNode=False):
interface = self.interface
if recipientNode == False:
pid = interface.sendData(
packet, wantAck=True, onResponse=self.responseCheck
)
else:
pid = interface.sendData(
packet,
destinationId=recipientNode,
wantAck=True,
onResponse=self.responseCheck,
)
# Can I use waitForAckNak on cpid?
self.cpid = pid.id
return True
async def awaitResponse(self, pid):
for i in range(120):
await asyncio.sleep(1)
if str(pid) in self.acks:
break
return True
async def initNodeDH(self, dhefOb, recipientNode, onodeID):
await self.addPackets(
msgpack.dumps(
{"params": dhefOb.getParamsBytes(), "publicKey": dhefOb.publicKey}
),
self.onodeID,
0,
0,
onodeID,
directID=recipientNode,
packetsClass=3,
)
def awaitFullResponse(self, pid):
for i in range(1_000_000_000):
time.sleep(5)
if pid in self.messages.keys():
if self.messages[pid]["finished"]:
break
return True
async def addPackets(
self,
data,
sender,
senderName,
recipient,
recipientNode,
directID=False,
packetsClass=None,
encrypt=False,
):
interface = self.interface
tp = Packets(
data,
sender,
senderName,
recipient,
recipientNode,
packetsClass=packetsClass,
)
# print(sys.getsizeof(tp.packets[0]))
# print(tp.packets[0])
for p in tp.packets:
# time.sleep(5)
if recipientNode == None:
# print("sending none")
# print(p)
self.send(p)
else:
# print(p)
# print(recipientNode)
self.cLog(10, "Sending target: " + str(directID))
if directID != False:
recipientNode = directID
self.send(p, recipientNode=recipientNode)
awaitTask = asyncio.create_task(self.awaitResponse(self.cpid))
await asyncio.sleep(1)
currentTask = {
"ob": awaitTask,
"pid": str(self.cpid),
"packet": p,
"retry": False,
}
self.tasks[str(self.cpid)] = currentTask
async def progressCheck(self):
# interface = self.interface
while True:
await asyncio.sleep(90)
self.cLog(
20, "Checking progress of {0} tasks".format(len(self.tasks.keys()))
)
doneFlag = True
dcTasks = [k for k in self.tasks.keys()]
for task in dcTasks:
task = self.tasks[task]
if task["ob"]:
if task["pid"] in self.acks:
if not self.acks[task["pid"]]:
retry = task["retry"]
remove = False
if retry == False:
retry = 1
elif retry < 3:
retry += 1
else:
self.cLog(30, "Too many retries")
remove = True
if remove:
del self.tasks[task["pid"]]
else:
self.cLog(20, "Doing retry")
doneFlag = False
# TODO: Resend to specific node
self.send(task["packet"])
await_thread = asyncio.create_task(
self.awaitResponse(task["pid"])
)
await asyncio.sleep(1)
currentTask = {
"ob": await_thread,
"pid": str(self.cpid),
"packet": task["packet"],
}
currentTask["retry"] = retry
self.tasks[task["pid"]] = currentTask
else:
del self.tasks[task["pid"]]
async def announce(self):
while True:
self.cLog(10, "Announce")
await self.sendAnnounce()
await asyncio.sleep(180)
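`progressCheck` escalates a task's `retry` field from `False` to 1, 2, 3 and then drops the task with a "Too many retries" log. That escalation step in isolation:

```python
def next_retry(retry):
    # Mirror progressCheck's escalation: False -> 1 -> 2 -> 3 -> give up.
    if retry is False:
        return 1          # first failure: start counting
    if retry < 3:
        return retry + 1  # subsequent failures: increment
    return None           # exhausted: caller deletes the task

state = False
attempts = []
while (state := next_retry(state)) is not None:
    attempts.append(state)
assert attempts == [1, 2, 3]
```

In the loop above, each non-`None` step corresponds to one resend via `self.send(task["packet"])` before the task is finally discarded.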

View File

@ -0,0 +1,5 @@
Layer 0 data transmission
===============================
.. autoclass:: Transmission.transmission.Transmitter
:members:

0
src/__init__.py Normal file
View File

1
src/builddocs Executable file
View File

@ -0,0 +1 @@
sphinx-build -M markdown source docs

37
src/conf.py Normal file
View File

@ -0,0 +1,37 @@
import os
import sys
sys.path.insert(0, os.path.abspath("../src"))
# Configuration file for the Sphinx documentation builder.
#
# For the full list of built-in configuration values, see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information
project = "PierMesh"
copyright = "2024, Agie Ashwood"
author = "Agie Ashwood"
release = "Proto"
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration
extensions = ["sphinx_markdown_builder", "sphinx.ext.autodoc", "sphinx.ext.napoleon"]
templates_path = ["_templates"]
exclude_patterns = []
# -- Options for HTML output -------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output
html_theme = "alabaster"
html_static_path = ["_static"]
# Added markdown configuration
markdown_http_base = "/PierMesh/piermesh/src/branch/main/docs"

Binary file not shown.

Binary file not shown.

View File

@ -0,0 +1,10 @@
<!-- PierMesh documentation master file, created by
sphinx-quickstart on Fri Jul 26 23:30:55 2024.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive. -->
# PierMesh documentation
Add your content using `reStructuredText` syntax. See the
[reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html)
documentation for details.

19
src/index.rst Normal file
View File

@ -0,0 +1,19 @@
.. PierMesh documentation master file, created by
sphinx-quickstart on Fri Jul 26 23:30:55 2024.
PierMesh documentation
======================
.. toctree::
:caption: Contents:
:glob:
run
../ui
../Bubble/*
../Components/*
../Cryptography/*
../Filters/*
../Packets/*
../Transmission/*
../webui/*

35
src/make.bat Normal file
View File

@ -0,0 +1,35 @@
@ECHO OFF
pushd %~dp0
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=source
set BUILDDIR=build
%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.https://www.sphinx-doc.org/
exit /b 1
)
if "%1" == "" goto help
%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end
:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
:end
popd

4
src/mpify.py Normal file
View File

@ -0,0 +1,4 @@
import src.Components.daisy as d
import sys
d._json_to_msg(sys.argv[1])

9
src/piermesh-mini.ascii Normal file
View File

@ -0,0 +1,9 @@
▄▄▄▄▄▄▄▄
▄▄▄▄▄▄▄▄
▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄
█ █ █ █
█ █ PIERMESH █ █
█ █ █ █
▀ ▀▀▀▀▀▀▀▀▀▀▀▀▀▀ ▀
▀▀▀▀▀▀▀▀
▀▀▀▀▀▀▀▀

21
src/piermesh.ascii Normal file
View File

@ -0,0 +1,21 @@
▄▄▄▄▄▄▄▄▄▄▄▄▄▄
▀▀▀▀▀▀▀▀▀▀▀▀▀▀
▄▄▄▄▄▄▄▄▄▄▄▄▄▄
▀▀▀▀▀▀▀▀▀▀▀▀▀▀
██ ██ ██████████████ ██ ██
██ ██ ██ ██ ██ ██ ██ ██
██ ██ ██████████████ ██ ██
██ ██ ██ ██ ██ ██ ██ ██
██ ██ ██████████████ ██ ██
██ ██ ██ ██ ██ ██ ██ ██
██ ██ ██████████████ ██ ██
▄▄▄▄▄▄▄▄▄▄▄▄▄▄
▀▀▀▀▀▀▀▀▀▀▀▀▀▀
▄▄▄▄▄▄▄▄▄▄▄▄▄▄
▀▀▀▀▀▀▀▀▀▀▀▀▀▀
▄▄▄▄▄▄▄▄▄▄▄▄
▌ PIERMESH ▐
▀▀▀▀▀▀▀▀▀▀▀▀

280
src/run.py Executable file
View File

@ -0,0 +1,280 @@
from Filters.base import Filter
from Bubble.router import Router
from webui.serve import Server
from Transmission.transmission import Transmitter
import asyncio
import sys
import time
import psutil
import logging
import datetime
from Cryptography.DHEFern import DHEFern
from microdot import Request
import traceback
from ui import TUI
import threading, os
if __name__ == "__main__":
# Global objects for the PierMesh service and the TUI so we can terminate the associated processes later
global nodeOb, tuiOb
tuiOb = None
nodeOb = None
# Enable 500 kB files in the webui
Request.max_content_length = 1024 * 1024 * 0.5
Request.max_body_length = 1024 * 1024 * 0.5
# Pull startup parameters
device, webPort, serverInfoFile, delay, nodeNickname = sys.argv[1:]
# Set up file based logging
logPath = "logs"
fileName = datetime.datetime.now().strftime("%m%d%Y_%H%M%S")
logFormat = "%(asctime)s [%(threadName)-12.12s] [%(levelname)-5.5s] %(message)s"
logging.basicConfig(
format=logFormat,
level=logging.INFO,
filename="{0}/{2}_{1}.log".format(logPath, fileName, nodeNickname),
)
class Node:
"""
Class that handles most of the PierMesh data
`🔗 Source <https://git.utopic.work/PierMesh/piermesh/src/branch/main/src/run.py>`_
"""
def __init__(self):
self.toLog = []
"""
We store logs to be processed here
See Also
--------
logPassLoop: Loop to handle logging to file and TUI
"""
actionsList = [f for f in dir(self) if "action" in f]
self.actions = {}
for a in actionsList:
self.actions[a.split("_")[1]] = getattr(self, a)
self.cLog(20, "Past action mapping")
self.r = Router(self.cLog, nfpath=serverInfoFile)
self.cLog(20, "Router initialized")
self.onodeID = str(self.r.serverInfo.get()["nodeID"])
self.catch = self.r.c
self.cache = self.r.cache
self.s = None
self.todo = []
self.f = Filter(self.cache, self.onodeID, self.todo, self.cLog)
self.cLog(20, "Filter initialized")
# self.cLog(30, sys.argv)
self.t = None
self.cLog(20, "Cryptography initializing")
self.cryptographyInfo = DHEFern(self.cache, nodeNickname, self.cLog)
self.cLog(20, "Cryptography initialized")
self.processed = []
self.proc = psutil.Process(os.getpid())
self.mTasks = {}
def cLog(self, priority: int, message: str):
"""
Convenience function that logs to the ui and log files
Parameters
----------
priority: int
Priority of message to be passed to logging
message: str
Message to log
Returns
-------
None
"""
logging.log(priority, message)
self.toLog.append("[{0}]:\n{1}".format(datetime.datetime.now(), message))
async def monitor(self):
"""
Monitor and log ram and cpu usage
"""
while True:
if tuiOb != None:
if tuiOb.done:
print("Terminating PierMesh service...")
self.proc.terminate()
await asyncio.sleep(10)
memmb = self.proc.memory_info().rss / (1024 * 1024)
memmb = round(memmb, 2)
cpup = self.proc.cpu_percent(interval=1)
self.cLog(
20,
" MEM: {0} mB | CPU: {1} %".format(
memmb,
cpup,
),
)
# Set cpu and memory usage in the TUI
tuiOb.do_set_cpu_percent(float(cpup))
tuiOb.do_set_mem(memmb)
async def fListen(self):
"""
Loop to watch for tasks to do
See Also
--------
Filters.base.sieve: Packet filtering/parsing
Notes
-----
We use a common technique here that calls the function from our preloaded actions via dictionary entry
"""
while True:
while len(self.todo) >= 1:
todoNow = self.todo.pop()
action = todoNow["action"]
self.cLog(20, "Action: " + action)
data = todoNow["data"]
await self.actions[action](data)
await asyncio.sleep(1)
async def action_sendToPeer(self, data: dict):
"""
Send data to a peer connected to the server
Parameters
----------
data: dict
Data passed from the filter, this is a generic object so it's similar on all actions here
See Also
--------
Filters.Protocols: Protocol based packet filtering
webui.serve.Server: Runs a light Microdot web server with http/s and websocket functionality
webui.serve.Server.sendToPeer: Function to actually execute the action
"""
self.s.sendToPeer(data["recipient"], data["res"])
async def action_sendCatch(self, data: dict):
"""
Get catch and return the data to a peer
See Also
--------
Bubble.router.Router: Routing class
"""
res = self.r.getCatch(data["head"], data["body"], fins=data["fins"])
self.s.sendToPeer(data["recipient"], res)
async def action_map(self, data: dict):
"""
Map new network data to internal network map
See Also
--------
Bubble.network.Network: Layered graph network representation
"""
self.r.n.addLookup(data["onodeID"], data["mnodeID"])
self.cLog(20, "Lookup addition done")
self.r.n.addon(data["onodeID"])
async def action_initNodeDH(self, data: dict):
"""
Initialize Diffie-Hellman key exchange
See Also
--------
Cryptography.DHEFern.DHEFern: End to end encryption functionality
"""
if self.cryptographyInfo.getRecord("key", data["onodeID"]) == False:
await self.t.initNodeDH(
self.cryptographyInfo, int(data["mnodeID"]), data["onodeID"]
)
async def action_keyDeriveDH(self, data: dict):
"""
Derive key via Diffie-Hellman key exchange
"""
try:
self.cryptographyInfo.keyDerive(
data["publicKey"],
self.cryptographyInfo.getSalt(),
data["recipientNode"],
data["params"],
)
except:
self.cLog(30, traceback.format_exc())
async def logPassLoop():
"""
Loop to pass logs up to the TUI
See Also
--------
tui.TUI: TUI implementation
"""
global tuiOb, nodeOb
while True:
await asyncio.sleep(1)
if tuiOb == None or nodeOb == None:
await asyncio.sleep(1)
elif tuiOb.done == True:
print("Terminating PierMesh service...")
nodeOb.proc.terminate()
else:
ctoLog = [l for l in nodeOb.toLog]
for l in ctoLog:
tuiOb.do_write_line(l)
nodeOb.toLog.pop(0)
async def main():
"""
Main method for running the PierMesh service
"""
global nodeOb
try:
n = Node()
nodeOb = n
nodeOb.cLog(20, "Starting up")
nodeOb.cLog(20, "Staggering {0} seconds, please wait".format(sys.argv[4]))
time.sleep(int(sys.argv[4]))
n.t = Transmitter(
sys.argv[1], n.f, n.onodeID, n.cache, n.catch, n.cryptographyInfo, n.cLog
)
n.s = Server(n.t, n.catch, n.onodeID, n.r.n, n.cLog)
n.mTasks["list"] = asyncio.create_task(n.fListen())
await asyncio.sleep(1)
n.mTasks["pct"] = asyncio.create_task(n.t.progressCheck())
await asyncio.sleep(1)
n.mTasks["mon"] = asyncio.create_task(n.monitor())
await asyncio.sleep(1)
n.mTasks["announce"] = asyncio.create_task(n.t.announce())
await asyncio.sleep(1)
await n.s.app.start_server(port=int(sys.argv[2]), debug=True)
except KeyboardInterrupt:
sys.exit()
if __name__ == "__main__":
try:
t = threading.Thread(target=asyncio.run, args=(main(),))
t.start()
lplt = threading.Thread(target=asyncio.run, args=(logPassLoop(),))
lplt.start()
tuiOb = TUI()
tuiOb.nodeOb = nodeOb
tuiOb.run()
except KeyboardInterrupt:
nodeOb.cLog(30, "Terminating PierMesh service...")
except Exception:
nodeOb.cLog(30, traceback.format_exc())
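The `__main__` block above runs the asyncio entry point on a background thread while the blocking TUI owns the main thread. A minimal self-contained sketch of that startup pattern (dummy names, not the real `Node`/`TUI`):

```python
import asyncio
import threading

logLines = []

async def service():
    # Stand-in for the PierMesh service loop
    for i in range(3):
        logLines.append("tick {0}".format(i))
        await asyncio.sleep(0.01)

# Run the asyncio entry point on a worker thread, as run.py does
worker = threading.Thread(target=asyncio.run, args=(service(),))
worker.start()
worker.join()  # In run.py the main thread runs the TUI instead of joining
```

In the real script the main thread then constructs the TUI and calls `run()`, so both event loops stay responsive.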

6
src/run.rst Normal file

@ -0,0 +1,6 @@
PierMesh service runner
=======================
.. automodule:: run.main
.. autoclass:: run.Node
:members:

1
src/scripts/falin Executable file

@ -0,0 +1 @@
python run.py /dev/ttyACM1 5000 server2.info 0 falin

1
src/scripts/falin.pypy Executable file

@ -0,0 +1 @@
./tmp/pypy3.10-v7.3.16-linux64/bin/pypy3.10 run.py /dev/ttyACM1 5000 server2.info 0 falin

1
src/scripts/marcille Executable file

@ -0,0 +1 @@
python run.py /dev/ttyACM0 5001 server1.info 30 marcille

1
src/scripts/setregion Executable file

@ -0,0 +1 @@
python -m meshtastic --set lora.region US --port /dev/ttyACM0

66
src/stale/microplane.py Executable file

@ -0,0 +1,66 @@
import base64, uuid, json, sys, lzma, bson, random, msgpack
from Packets import Packet, HeaderPacket
def compressPackets(packets):
cpackets = []
for packet in packets:
cpacket = lzma.compress(packet)
cpackets.append(cpacket)
return cpackets
def decompressPackets(packets):
cpackets = []
for packet in packets:
cpacket = lzma.decompress(packet)
cpackets.append(cpacket)
return cpackets
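The two helpers above are symmetric per-packet lzma wrappers; a quick round trip with illustrative payloads:

```python
import lzma

# Illustrative raw packet bytes, not real PierMesh packets
rawPackets = [b"header-bytes", b"body-bytes" * 10]
compressed = [lzma.compress(p) for p in rawPackets]
restored = [lzma.decompress(c) for c in compressed]
assert restored == rawPackets
```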
# TODO: Sub packets
# Sub packets are indicated by a flag in the header
# TODO: Sub packets implementation
# TODO: Sub packet recursion, collapse
# TODO: Determine how many sub packets are allowed per header packet size; done: 5, but user ids and sub packet ids must be 6 digit integers
# Remove duplicate references to objects
# Local db that stores users to lookup for less metadata, daisy
# IDS MUST BE 6 DIGITS
# location prefix added by node
# Check if packet is header by checking if it has sender_id
# DO NOT CHANGE DATA SIZE UNLESS YOU KNOW WHAT YOU'RE DOING
# Moved to Packets/Packets
def reassemblePackets(packets):
#print(packets)
packet_count = msgpack.loads(packets[0])["packet_count"]
#print("Packet count")
#print(packet_count)
positions = []
for packet in packets:
p = msgpack.loads(packet)
num = 0
if "packet_number" in p:
num = p["packet_number"]
#print(p)
positions.append(num)
tpackets = []
for it in range(0, len(positions)):
tpackets.append(packets[positions.index(it)])
packets = tpackets
res = b""
#print("Reassembling")
#print(len(packets))
for it in range(len(packets)):
if it > 0:
#print(it)
#print(res)
#print(bson.loads(packets[it]).keys())
res = res + lzma.decompress(msgpack.loads(packets[it])["data"])
#print(len(res))
#print(bson.loads(res))
#print(res)
#print(bson.loads(res))
return msgpack.loads(res)
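`reassemblePackets` expects a header packet carrying `packet_count` plus numbered data packets whose `data` fields are lzma-compressed slices of one msgpack body. A self-contained sketch of that format and its inverse (function names are illustrative, not the module's API):

```python
import lzma
import msgpack

def makePackets(obj, chunkSize=100):
    # Header packet carries the total count; data packets are numbered
    # from 1 and carry lzma-compressed slices of the msgpack body
    raw = msgpack.dumps(obj)
    chunks = [raw[i:i + chunkSize] for i in range(0, len(raw), chunkSize)]
    packets = [msgpack.dumps({"packet_count": len(chunks) + 1})]
    for n, chunk in enumerate(chunks, start=1):
        packets.append(msgpack.dumps({"packet_number": n, "data": lzma.compress(chunk)}))
    return packets

def reassemble(packets):
    # Sort by packet_number (the header has none and sorts first),
    # then concatenate the decompressed data fields
    ordered = sorted(packets, key=lambda p: msgpack.loads(p).get("packet_number", 0))
    body = b"".join(lzma.decompress(msgpack.loads(p)["data"]) for p in ordered[1:])
    return msgpack.loads(body)

message = {"message": "hello PierMesh"}
assert reassemble(makePackets(message, chunkSize=8)) == message
```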
def raiseSizeError(index):
raise ValueError("Field of index: " + str(index) + " too big, maximum is 200 bytes")

30
src/stale/sio.py Executable file
View File

@ -0,0 +1,30 @@
import time
import socketio
from threading import Thread
import transmission as t
import microplane as m
import bson

mgr = socketio.RedisManager('redis://')
sio = socketio.Server(client_manager=mgr)


@sio.event
def send(sid, data):
    # NOTE: `interface` is expected to be provided by the transmission setup
    packets = m.getPackets(bson.dumps(data))
    # TODO: Implement awaitFullResponse
    threads = []
    for p in packets:
        t.send(interface, p)
        pid = t.cpid
        threads.append(Thread(target=t.awaitResponse, args=[pid]))
        threads[-1].start()
        time.sleep(1)
    while True:
        done = True
        for th in threads:
            if th.is_alive():
                done = False
        if done:
            break
    sio.emit('done', {'data': t.messages[packets[0]["packets_id"]]["fresponse"]}, room=sid)

193
src/stale/transmission.old.py Executable file

@ -0,0 +1,193 @@
import meshtastic
import meshtastic.serial_interface
from pubsub import pub
import bson
import microplane as m
import sys
import time
import asyncio
import functools
from util import sendData
from threading import Thread
import webview
from Daisy import Catch, Cache
from Filters import Base
tcache = Cache()
tcatch = Catch()
html = False
notConnected = True
messages = {}
acks = {}
# Be careful with this
cpid = 0
# TODO: Filter out non data packets/log them, DONE
# TODO: Figure out why the message count is resetting, DONE, the while loop was...looping
# TODO: Sending packets across multiple nodes/load balancing/distributed packet transmission/reception
def onReceive(packet, interface):
Base.sieve(packet)
tcache.refresh()
def onConnection(interface, topic=pub.AUTO_TOPIC): # called when we (re)connect to the radio
# defaults to broadcast, specify a destination ID if you wish
interface.sendData("connect".encode("utf-8"))
global notConnected
notConnected = False
def responseCheck(packet):
#print("got response")
#print("Checking")
#print(packet["decoded"])
rid = packet["decoded"]["requestId"]
print(rid)
# TODO: Set to false on error
print(packet["decoded"])
if (packet["decoded"]["routing"]["errorReason"] == "MAX_RETRANSMIT"):
print("Got ack error")
acks[str(rid)] = False
else:
acks[str(rid)] = True
# TODO: Threaded send method
def send(interface, packet):
global cpid
# TODO: Set to confirm receipt, DONE
# TODO: Async sendData call
# TODO: Fix logging error
print("sending")
pid = interface.sendData(packet, wantAck=True, onResponse=responseCheck)
#pid = await sendData(interface, packet, wantAck=True, onResponse=responseCheck)
#pid = await iloop.run_in_executor(None, functools.partial(interface.sendData, interface, packet, wantAck=True, onResponse=responseCheck))
# Can I use waitForAckNak on cpid?
cpid = pid.id
print(cpid)
#return pid
return True
def awaitResponse(pid):
#pid = interface.sendData(p, wantAck=True, onResponse=responseCheck)["id"]
#pid = await loop.run_in_executor(None, send, interface, p)
#pid = await loop.run_in_executor(None, functools.partial(interface.sendData, wantAck=True, onResponse=responseCheck), interface, p)
print(pid)
for i in range(1_000_000_000):
time.sleep(5)
if str(pid) in acks:
print("Response gotten")
break
print("waiting")
return True
def awaitFullResponse(pid):
for i in range(1_000_000_000):
time.sleep(5)
if pid in messages.keys():
if messages[pid]["finished"]:
print("Response gotten")
break
print("waiting")
return True
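Both `awaitResponse` and `awaitFullResponse` above poll a shared table until an entry appears. The pattern in isolation, with a bounded retry count instead of the effectively unbounded 5-second loop (names here are illustrative):

```python
import time

acks = {}

def awaitAck(pid, tries=100, delay=0.01):
    # Poll the shared ack table until the packet id appears,
    # giving up after `tries` attempts instead of looping forever
    for _ in range(tries):
        if str(pid) in acks:
            return acks[str(pid)]
        time.sleep(delay)
    return False

acks["42"] = True
assert awaitAck(42) is True
```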
pub.subscribe(onReceive, "meshtastic.receive")
pub.subscribe(onConnection, "meshtastic.connection.established")
# By default will try to find a meshtastic device, otherwise provide a device path like /dev/ttyUSB0
interface = meshtastic.serial_interface.SerialInterface(sys.argv[-1])
# Wait to connect to partner
# TODO: use node id to deliver directly
while notConnected:
time.sleep(5)
print("Waiting")
if "0" in sys.argv[-1]:
tj = [[{"message":"free palestine! free all colonized people!"}], ["the people yearn for freedom"]]
j2 = {"message":"free palestine! free all colonized people!", "message2":"free palestine! free all colonized people!"}
htmlj = {"html": "<h1>Hello world!</h1>"}
htmljl = {"html": ""}
with open("test.html", "r") as f:
htmljl["html"] = f.read()
done = False
threads = {}
for p in m.getPackets(bson.dumps(htmljl), 600123, 600123, 600124):
print(sys.getsizeof(p))
#send_thread = Thread(target=send, args=[interface, p])
#send_thread.start()
send(interface, p)
#awaitResponse(cpid)
await_thread = Thread(target=awaitResponse, args=[cpid])
await_thread.start()
cth = {
"ob": await_thread,
"pid": str(cpid),
"packet": p,
"retry": False
}
threads[str(cpid)] = cth
# TODO: see if theres a better way to do this
time.sleep(10)
#await_thread.join()
#await_thread.join()
#loop = asyncio.new_event_loop()
#loopi = asyncio.new_event_loop()
#loopi.run_until_complete(send(interface, p))
#res = loop.run_until_complete(awaitResponse(cpid))
# figure out why it doesnt send before timing out
# TODO: running in different threads
#pid = send(interface, p).id
#loop = asyncio.new_event_loop()
#loopi = asyncio.new_event_loop()
#loopi.run_until_complete(send(loopi, interface, p))
#interface.waitForAckNak()
#res = loop.run_until_complete(awaitResponse(interface, p, cpid))
#print("Done waiting")
#interface.waitForAckNak()
# DO NOT RUN UNTIL responseCheck CHECKS FOR ERRORS
isDone = False
while not isDone:
doneFlag = True
for th in threads.keys():
th = threads[th]
if not th["ob"].is_alive():
if not acks[th["pid"]]:
retry = th["retry"]
if retry == False:
retry = 1
elif retry < 3:
retry += 1
else:
print("Too many retries")
break
doneFlag = False
send(interface, th["packet"])
await_thread = Thread(target=awaitResponse, args=[cpid])
await_thread.start()
cth = {
"ob": await_thread,
"pid": str(cpid),
"packet": p
}
cth["retry"] = retry
threads[str(cpid)] = cth
# TODO: see if theres a better way to do this
time.sleep(5)
if doneFlag:
isDone = True
for it, p in enumerate(m.getPacketsFromFile("r.jpg", 600123, 600123, 600124)):
#send(interface, p)
#pid = send(interface, p).id
#loopi = asyncio.new_event_loop()
#loopi.run_until_complete(send(loopi, interface, p))
#interface.waitForAckNak()
#res = loop.run_until_complete(awaitResponse(interface, p, cpid))
#interface.waitForAckNak()
#print("Sending packet: " + str(it))
break
else:
while True:
if html != False:
break
pass
webview.create_window('Home', html=html)
webview.start()

129
src/ui.py Normal file

@ -0,0 +1,129 @@
from textual.app import App, ComposeResult
from textual.widgets import Log, Label, Footer, Header, ProgressBar
from textual.binding import Binding
from textual.containers import Horizontal, Vertical
import sys, os
global nodeOb
nodeOb = None
class TUI(App):
"""
TUI for PierMesh
`🔗 Source <https://git.utopic.work/PierMesh/piermesh/src/branch/main/src/ui.py>`_
Attributes
----------
visibleLogo: bool
Whether the logo is visible or not, used in toggling visibility
nodeOb: Node
Reference to the Node running the PierMesh service
done: bool
Whether the TUI has been killed
"""
visibleLogo = True
nodeOb = None
done = False
CSS_PATH = "ui.tcss"
BINDINGS = [
Binding(key="q", action="quitFull", description="Quit the app", show=True),
Binding(
key="f",
action="toggleFullscreen",
description="Full screen the logs",
show=True,
),
]
def action_toggleFullscreen(self):
"""
Toggle fullscreen logs by either collapsing the width or restoring it to its original size
"""
if self.visibleLogo:
self.query_one("#logo").styles.width = 0
else:
self.query_one("#logo").styles.width = "50%"
self.visibleLogo = not self.visibleLogo
def action_quitFull(self):
"""
Kill the whole stack by marking the TUI as done and terminating the thread. run.monitor checks this flag later and kills the rest of the stack with psutil
See Also
--------
run.monitor
"""
self.done = True
sys.exit("Terminating TUI...")
def compose(self) -> ComposeResult:
"""
Build the TUI
"""
# Load the ascii art for display on the left label
asciiArt = ""
with open("piermesh-mini.ascii", "r") as f:
asciiArt = f.read()
yield Header(icon="P")
yield Label(asciiArt, classes="largeLabel", name="logo", id="logo")
yield Vertical(
Log(auto_scroll=True, classes="baseLog"),
Label("CPU usage:", name="cpul", id="cpul"),
ProgressBar(show_eta=False, show_percentage=True),
Label("MEM usage: ", name="meml", id="meml"),
)
yield Footer()
def do_write_line(self, logLine: str):
"""
Write line to the logs panel
Parameters
----------
logLine: str
Line to log
"""
log = self.query_one(Log)
log.write_line(logLine)
def do_set_cpu_percent(self, percent: float):
"""
Set CPU percent in the label and progress bar
Parameters
----------
percent: float
Percent of the cpu PierMesh is using
"""
self.query_one("#cpul").update("CPU usage: {0} %".format(str(percent)))
pbar = self.query_one(ProgressBar)
pbar.progress = percent
def do_set_mem(self, memmb: float):
"""
Set memory usage label in the ui
Parameters
----------
memmb: float
Memory usage of PierMesh in megabytes
"""
self.query_one("#meml").update("MEM usage: {0} mB".format(str(memmb)))
def on_mount(self):
"""
Called at setup; configures the title and the progress bar
"""
self.title = "PierMesh TUI"
self.query_one(ProgressBar).update(total=100)
self.query_one(ProgressBar).update(progress=0)
if __name__ == "__main__":
app = TUI()
app.run()

5
src/ui.rst Normal file

@ -0,0 +1,5 @@
TUI application
==========================
.. autoclass:: ui.TUI
:members:

23
src/ui.tcss Normal file

@ -0,0 +1,23 @@
Screen {
layout: horizontal;
scrollbar-size: 0 0;
}
.largeLabel {
width: 40%;
}
.baseLog {
height: 80%;
scrollbar-background: $primary-background;
scrollbar-corner-color: $primary-background;
scrollbar-color: green;
scrollbar-size: 0 1;
}
ProgressBar {
width: 50%;
}
Bar > .bar--bar {
color: green;
}

40
src/webui/build.py Executable file

@ -0,0 +1,40 @@
from jinja2 import Environment, FileSystemLoader, select_autoescape
import os, markdown2
import json, msgpack, subprocess
import shutil
from distutils.dir_util import copy_tree
env = Environment(loader=FileSystemLoader("templates"))
# subprocess.check_call("mmdc -i * -e png")
# TODO: Generating mmd from docstrings
for path in os.listdir("diagrams/markdown"):
fname = path.split(".")[0]
try:
subprocess.check_call(
"mmdc -i diagrams/markdown/{0} -o res/img/diagrams/{1}.png".format(
path, fname
),
shell=True,
)
except Exception as e:
print("Empty file or other error:", e)
copy_tree("diagrams/markdown", "res/diagrams")
copy_tree("res", "build/res")
shutil.copyfile("htmx-extensions/src/ws/ws.js", "build/res/js/ws.js")
tpath = "templates/"
for path in os.listdir(tpath):
if "base" not in path:
for t in os.listdir(tpath + path):
if not os.path.exists("build/" + path):
os.makedirs("build/" + path)
ipath = tpath + path + "/" + t
template = env.get_template(path + "/" + t)
with open("build/{0}/{1}".format(path, t), "w") as f:
f.write(template.render())
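The loop above renders every non-base template into `build/` via Jinja2's `FileSystemLoader`. The core render step, shown here with a `DictLoader` so the sketch is self-contained (the template path and `title` variable are illustrative):

```python
from jinja2 import Environment, DictLoader

# In build.py the loader is FileSystemLoader("templates") instead
env = Environment(loader=DictLoader({"home/index.html": "<h1>{{ title }}</h1>"}))
template = env.get_template("home/index.html")
print(template.render(title="PierMesh"))  # → <h1>PierMesh</h1>
```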


@ -0,0 +1,13 @@
---
title: "🔵 Bubble"
---
erDiagram
"👥 Peer" |{..o| "🗄️ Server" : "🔌 WS"
"👥 Peer" |{..o| "🗄️ Server": "📄 HTTP/S"
"🗄️ Server" |o..o| "📤 Transmitter": "❔ Queries"
"📤 Transmitter" |o..o| "📻 PierMesh": "📻 send"
"📤 Transmitter" |o..o| "📻 PierMesh": "📻 onReceive"
"📤 Transmitter" |o..o| "🧽 Sieve": "📻 onReceive"
"🧽 Sieve" |o..o| "💿 Cache": "Write"
"💿 Cache" |o..o| "👂 fListen": "Write event"
"👂 fListen" |o..o| "🗄️ Server": "✉️ Pass message"


@ -0,0 +1,16 @@
---
title: "🐟 Catch"
---
erDiagram
"👥 Peer" |{..o| "🗄️ Server": "🔌 WS"
"👥 Peer" |{..o| "🗄️ Server": "📄 HTTP/S"
"🗄️ Server" |o..o| "🐟 Catch": "❔ Queries"
"📻 PierMesh" |o..o| "🧽 Sieve": "🧽 Filters"
"🧽 Sieve" |o..o| "👂 fListen": "👂 Listens for messages"
"👂 fListen" |o..o| "🐟 Catch": "❔ Queries"
"🐟 Catch" |o..}| "🌼 Daisy": "📄 Store references"
"🌼 Daisy" {
string filepath
dictionary msg
}
"🌼 Daisy" |o..o| "📁 File system": "📁 CRUD"
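The Daisy entity in the diagram above is essentially a dict (`msg`) tied to a `filepath`, with CRUD against the file system. A toy sketch of that shape (json is used here purely for illustration; Daisy itself is a schemaless binary store, and `MiniDaisy` is not its real API):

```python
import json
import os
import tempfile

class MiniDaisy:
    # Toy stand-in: one record = one file holding a serialized dict
    def __init__(self, filepath, msg=None):
        self.filepath = filepath
        if msg is not None:
            self.msg = msg
            self.write()
        elif os.path.exists(filepath):
            with open(filepath, "r") as f:
                self.msg = json.load(f)
        else:
            self.msg = {}

    def write(self):
        with open(self.filepath, "w") as f:
            json.dump(self.msg, f)

path = os.path.join(tempfile.mkdtemp(), "record")
MiniDaisy(path, {"nodeID": "600123"})
assert MiniDaisy(path).msg == {"nodeID": "600123"}
```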


@ -0,0 +1,5 @@
---
title: "🌼 Daisy"
---
erDiagram


@ -0,0 +1,20 @@
---
title: "📻 PierMesh"
---
erDiagram
"👥 Peer" }|..|{ "🗄️ Server" : "🔌 WS"
"👥 Peer" }|..|{ "🗄️ Server": "📄 HTTP/S"
"🗄️ Server" |o..o| "🐟 Catch": "❔ Queries"
"🗄️ Server" |o..o| "💿 Cache": "❔ Queries"
"🗄️ Server" |o..o| "📤 Transmitter": "❔ Queries"
"🐟 Catch" |o..o| "📤 Transmitter": "❔ Queries"
"🐟 Catch" |o..o| "👥 Peer": "🔌 WS"
"🐟 Catch" |o..o| "📤 Transmitter": "✉️ Sync packet"
"💿 Cache" |o..o| "📤 Transmitter": "✉️ Sync packet"
"👂 fListen" |o..o| "💿 Cache": "👂 Listen for completed messages"
"👂 fListen" |o..o| "👥 Peer": "🔌 WS"
"📤 Transmitter" |o..o| "🔽 onReceive": "✉️ Get packet"
"🔽 onReceive" |o..o| "🧽 Sieve": "🧽 Filter packet"
"🧽 Sieve" |o..o| "💿 Cache": "Push completed messages"
"📤 Transmitter" |o..o| "📻 PierMesh": "📻 Send"
"📻 PierMesh" |o..o| "🔽 onReceive": "🔽 Receive"


Some files were not shown because too many files have changed in this diff.