Trans: Latin prefix implying "across" or "beyond", often used in gender nonconforming situations – Scend: archaic word describing a strong "surge" or "wave", originating with 15th century English sailors – Survival: 15th century English compound word describing an existence only worth transcending.


Bits & Bobs, Mushstools & Toadrooms

...Despite a chilly & wintry March up here in the White Mountains, there is no shortage of fun birds and exciting projects!

Merlin AI pipeline for Mushroom identification!

It's happening, and it is going to be awesome. YMMV, but YOLO:

Image-based mushroom identification

Artifacts:

Dataset: images.tgz train.tgz test.tgz
Annotator json: images.json categories.json config.json
@ai.columbari.us: Web Annotator! mo_example_task.tar.gz mo_example_task.zip


Chindōgu ASCII art

A ridiculous Chindōgu utility prompt & CLI for fetching private releases & files from GitHub & BitBucket

  • Fetch, unpack, & extract specific releases & files or a complete master branch from a private GitHub repo with an API access token
  • Fetch and extract specific files or complete branches from a private BitBucket account with the user's git authentication
  • Prefill default prompt values with a variety of console flags
  • Save & load default prompt values with a file of environment variables- see the templates FetchReleasegSampleEnv_GitHub, FetchFilegSampleEnv_BitBucket, FetchEverythingSampleEnv_BitBucket, FetchEverythingSampleEnv_GitHub; pass one as an argument with the -e flag (./LeafletSync -e YourEnvFile) or provide one on launch.
curl https://raw.githubusercontent.com/Jesssullivan/LeafletSync/main/LeafletSync --output LeafletSync && chmod +x LeafletSync && ./LeafletSync


naive distance measurements with opencv

Knowing both the Field of View (FoV) of a camera's lens and the dimensions of the object we'd like to measure (Region of Interest, ROI) seems like more than enough to get a distance.

Note, opencv has an extensive suite of actual calibration tools and utilities here.

...But without calibration or much forethought, could rough measurements of known objects even be usable? Some notes from a math-challenged individual:

# clone:
git clone https://github.com/Jesssullivan/misc-roi-distance-notes && cd misc-roi-distance-notes

Most webcams don't really provide a Field of View much greater than ~50 degrees- this is the value for a MacBook Pro's webcam, for instance. Here's the plan to get a Focal Length value from the Field of View:

So, thinking along the lines of similar triangles:

  • The camera angle forms the angle between the hypotenuse (one edge of the FoV angle) and the adjacent side
  • The Dimension is the opposite side of the triangle we are measuring with.
  • ^ This makes up the first of two "similar triangles"
  • Then, we start measuring: first, calculate the opposite ROI Dimension using the arbitrary Focal Length value we calculated from the first triangle- then, plug in the Actual ROI Dimensions.
  • Now the adjacent side of this ROI triangle should hopefully be the distance, in the units of the ROI's Actual Dimension. (A tiny sketch of this math follows just below.)
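
Before wiring up opencv, here is that plan as a tiny back-of-the-napkin sketch- every number below is a made-up placeholder rather than a measured value:

```python
# a minimal sketch of the two-triangle idea; all values are placeholders
import math

FOV_DEGREES = 50     # assumed camera field of view
KNOWN_ROI_MM = 240   # actual size of the object, e.g. a head's height in mm
frame_px = 720       # frame size along the same axis, in pixels
detected_px = 180    # size of the detected ROI, in pixels

# triangle one: an arbitrary focal length (in pixels) from the field of view
focal_px = (frame_px / 2) / math.tan(math.radians(FOV_DEGREES / 2))

# triangle two: distance from the known size, focal length & detected size
distance_mm = KNOWN_ROI_MM * focal_px / detected_px
print(round(distance_mm / 25.4), 'inches, roughly')
```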

source a fresh venv to fiddle from:

# venv:
python3 -m venv distance_venv
source distance_venv/bin/activate

# depends are imutils & opencv-contrib-python:
pip3 install -r requirements.txt

The opencv people provide a bunch of prebuilt Haar cascade models, so let's just snag one of them to experiment- here's one to detect human faces; we've all got one of those:

mkdir haar
wget https://raw.githubusercontent.com/opencv/opencv/master/data/haarcascades/haarcascade_frontalface_alt2.xml  -O ./haar/haarcascade_frontalface_alt2.xml

Of course, an actual thing with fixed dimensions would be better, like a stop sign!

Let's try to calculate the distance from the ratio between an actual, known dimension of the object and its detected dimension- here's the plan:

YMMV, but YOLO:

# `python3 measure.py`
import math
import cv2

DFOV_DEGREES = 50   # such as an average laptop webcam's field of view
KNOWN_ROI_MM = 240  # say, height of a human head

# image source:
cap = cv2.VideoCapture(0)

# detector:
cascade = cv2.CascadeClassifier('./haar/haarcascade_frontalface_alt2.xml')

while True:

    # Capture & resize a single image:
    _, image = cap.read()
    image = cv2.resize(image, (0, 0), fx=0.7, fy=0.7, interpolation=cv2.INTER_NEAREST)

    # Convert to greyscale while processing:
    gray_conv = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray_conv, (7, 7), 0)

    # get image dimensions:
    gray_width = gray.shape[1]
    gray_height = gray.shape[0]

    # focal length (in pixels) derived from the field of view:
    focal_value = (gray_height / 2) / math.tan(math.radians(DFOV_DEGREES / 2))

    # run detector:
    result = cascade.detectMultiScale(gray)

    for x, y, w, h in result:

        # distance = known size * focal length / detected size:
        dist = KNOWN_ROI_MM * focal_value / h
        dist_in = dist / 25.4

        # update display:
        cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)
        cv2.putText(image, 'Distance:' + str(round(dist_in)) + ' Inches',
                    (5, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)

    cv2.imshow('face detection', image)

    if cv2.waitKey(1) == ord('q'):
        break

run demo with:

python3 measure.py

-Jess

Client-side, asynchronous HTTP methods- TypeScript

...Despite the ubiquity of needing to make a POST request from a browser (or, perhaps, for this very reason) there seem to be just as many ways, methods, libraries, and standards for implementing HTTP functions in JavaScript as there are people doing said implementing. Between the adoption of the fetch API in browsers and the prevalence and power of Promises in JS, asynchronous HTTP needn't be a hassle!

/*
...happily processing some data in a browser, when suddenly...
....panik!
you need to complete a portion of this processing elsewhere on some server...:
*/


…Ever tried to Chrome Remote ➡️ Ubuntu Budgie?

Check out this project on my Github over here 🙂

Fully automated patching for Chrome Remote Desktop on Ubuntu Budgie.

Chrome remote desktop is fantastic, but often clashes with Xorg nuances from a variety of desktop environments in Ubuntu. This chrome-remote-desktop script extends and replaces the version automatically installed by Google in /opt/google/chrome-remote-desktop/chrome-remote-desktop. This stuff is only relevant for accessing your Ubuntu machine from elsewhere (i.e. the "server"; the client machine should not install anything- all it needs is a web browser).

Set up the server:

Before patching anything or pursuing other forms of delightful tomfoolery, follow the installation instructions provided by Google. Set up everything normally- install Google's .deb download with dpkg, set up a PIN, etc.
The trouble comes when you are trying to remote in- some problems you may encounter include:

  • none of the X sessions work, each immediately closing the connection to the client
  • the remote desktop environment crashes or becomes mangled
  • odd scaling issues or flaky resolution changes

Patch it up:

# get this script:
# wget https://raw.githubusercontent.com/Jesssullivan/chrome-remote-desktop-budgie/master/chrome-remote-desktop

# or:
git clone https://github.com/Jesssullivan/chrome-remote-desktop-budgie/ 
cd chrome-remote-desktop-budgie 

# behold:
python3 chrome-remote-desktop

# ...perhaps, if you are keen (optional):
sudo chmod u+x addsystemd.sh
sudo ./addsystemd.sh

What does this do?

We are primarily just enforcing the use of existing instances of X and correct display values as reported by your system.

  • This version keeps a persistent copy of itself in /usr/local/bin/, in addition to updating the one executed by Chrome in /opt/google/chrome-remote-desktop/.
  • A mirror of this script is also maintained at /usr/local/bin/chrome-remote-desktop.github, and will let the user know if there are updates.
  • The version distributed by Google is retained in /opt/ too, as chrome-remote-desktop.verbatim.
  • Each of these versions is compared by md5 hash- this way our patched version of chrome-remote-desktop will always make sure it is where it should be, even after Google pushes updates and overwrites everything in /opt/. (A small sketch of this idea follows below.)
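
Here's a minimal sketch of that compare-and-restore idea- just an illustration of the approach, not the script verbatim:

```python
# compare the patched copy against the active copy by md5 hash;
# if Google's updater has overwritten /opt/, put the patched version back.
import hashlib
import shutil

PATCHED = '/usr/local/bin/chrome-remote-desktop'
ACTIVE = '/opt/google/chrome-remote-desktop/chrome-remote-desktop'

def md5(path):
    with open(path, 'rb') as f:
        return hashlib.md5(f.read()).hexdigest()

if md5(PATCHED) != md5(ACTIVE):
    shutil.copy2(PATCHED, ACTIVE)  # restore the patched version
```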

This, That, etc

....Updated 07/19/2020

Bits & bobs, this & that of late:

...In an effort to thwart the recent heat and humidity here in the White Mountains (or, perhaps just to follow the philosophy of circuitous overcomplication... 🙂 ) here are some sketches of quick-release exhaust fittings of mine for a large, wheeled AC & dehumidifier unit (these have been installed throughout my home via window panels).

...Sketching out a severely overcomplicated "computer shelf", rapid-fab style:
(plasma cut / 3d printed four-post server rack == RepRapRack?? xD) 🙂

...Also, Ryan @ V1Engineering recently released his new MPCNC Primo here, should anyone be keen. Long Live the MPCNC! 🙂

...Oodles of fun every day over in the clipi project- check it out!

xD

clipi CLI!

Find this project on my github here!

...post updated 07/19/2020

An efficient toolset for Pi devices

Emulate, organize, burn, manage a variety of distributions for Raspberry Pi.


Choose your own adventure....

Emulate:
clipi virtualizes many common SBC operating systems with QEMU, and can play with both 32-bit and 64-bit operating systems.

  • Select from any of the included distributions (or add your own to /sources.toml!) and clipi will handle the rest.

Organize:
clipi builds and maintains organized directories for each OS, as well as a persistent & convenient .qcow2 QEMU disk image.

  • Too many huge source .img files and archives? clipi cleans up after itself under the Utilities... menu.
  • additional organizational & gcc compilation methods are available in /kernel.py

Write:
clipi burns emulations to external disks! Just insert an SD card or disk and follow the friendly prompts. All files, /home, & guest directories are written out.

  • Need to pre-configure (or double-check) wifi? Add your ssid and password to /wpa_supplicant.conf and copy the file to /boot on the freshly burned disk (see the sample after this list).
  • Need pre-enabled ssh? Copy /ssh to /boot too.
  • clipi provides options for writing from an emulation's .qcow2 file via qemu...
  • ...as well as from the source's raw image file with the verbatim argument
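
For reference, a headless-Pi wpa_supplicant.conf usually looks something like this (country, ssid & psk are placeholders to swap for your own):

```
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
```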

Manage:
clipi can find the addresses of all the Raspberry Pi devices on your local network- see the sketch after this list.

  • Need to do this a lot? clipi can install itself as a Bash alias (option under the Utilities... menu); fire it up whenever you want.
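
A rough sketch of one way to spot Pis on a local network (not necessarily how clipi does it): check the ARP table for the Raspberry Pi Foundation's MAC address prefixes.

```python
# look for Raspberry Pi MAC (OUI) prefixes in the local ARP table;
# `arp -a` output varies a bit between OSes, so treat this as a sketch.
import re
import subprocess

PI_OUI_PREFIXES = ('b8:27:eb', 'dc:a6:32', 'e4:5f:01')

def find_pis():
    arp = subprocess.run(['arp', '-a'], capture_output=True, text=True).stdout
    pis = []
    for line in arp.lower().splitlines():
        match = re.search(r'\(([\d.]+)\) at ([0-9a-f:]+)', line)
        if match and match.group(2).startswith(PI_OUI_PREFIXES):
            pis.append(match.group(1))
    return pis

if __name__ == '__main__':
    print(find_pis())
```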

Shortcuts:

Shortcuts & configuration arguments can be passed to clipi as a .toml (or yaml) file.

  • Shortcut files access clipi's tools in a similar fashion to the interactive menu:
# <shortcut>.toml
# you can access the same tools and functions visible in the interactive menu like so:
'Burn a bootable disk image' = true  
# same as selecting in the interactive cli
'image' = 'octoprint'
'target_disk' = 'sdc'  
  • clipi exposes many features only accessible via configuration file arguments, such as distribution options and emulation settings.
# <shortcut>.toml

# important qemu arguments can be provided via a shortcut file like so:
'kernel' = "bin/ddebian/vmlinuz-4.19.0-9-arm64"
'initrd' = "bin/ddebian/initrd.img-4.19.0-9-arm64"

# qemu arguments like these use familiar qemu lexicon:
'M' = "virt" 
'm' = "2048"

# default values can be edited the same way:
'cpu' = "cortex-a53"
'qcow_size' = "+8G"
'append' = '"rw root=/dev/vda2 console=ttyAMA0 rootwait fsck.repair=yes memtest=1"'

# extra arguments can be passed too:
'**args' = """
-device virtio-blk-device,drive=hd-root \
-no-reboot -monitor stdio
"""

# additional network arguments can be passed like so:
# (clipi may automatically modify network arguments depending on bridge / SLiRP settings)
'network' = """
-netdev bridge,br=br0,id=net0 \
-device virtio-net-pci,netdev=net0
"""
  • Supply a shortcut file like so:
    python3 clipi.py etc/find_pi.toml

  • take a look in /etc for some shortcut examples and default values

TODOs & WIPs:

bridge networking things:

  • working on guest --> guest, bridge --> host, host only mode networking options.
    as of 7/17/20 only SLiRP user mode networking works,
    see branch broken_bridge-networking
    to see what is currently cooking here

kernel stuff:

  • automate ramdisk & kernel extraction-
    most functions to do so are all ready to go in /kernel.py

  • other random kernel todos-

    • working on better options for building via qemu-debootstrap from chroot instead of debian netboot or native gcc
    • add git specific methods to sources.py for mainline Pi linux kernel
      • verify absolute binutils version
      • need to get cracking on documentation for all this stuff

gcp-io stuff:

  • formalize ddns.py & dockerfile

  • make sure all ports (22, 80, 8765, etc) can up/down as reverse proxy

# clone:
git clone https://github.com/Jesssullivan/clipi
cd clipi

# preheat:
pip3 install -r requirements.txt
# (or pip install -r requirements.txt)

# begin cooking some Pi:
python3 clipi.py

Parse fdisk -l in Python

fdisk -l has got to be one of the more common disk-related commands one might use while fussing about with raw disk images. The fdisk utility is ubiquitous across linux distributions (also brew install gptfdisk and brew cask install gdisk, supposedly). The -l argument provides a quick look at raw sector & file system info. Figuring out the Start, End, Sectors, Size, Id, & Format of a disk image's contents without needing to mount it and start lurking around is handy- just the sort of thing one might want to do with Python. Let's write a function to get these attributes into a dictionary- here's mine (a quick usage example follows the function):

import subprocess
import re

def fdisk(image):

    #  `image`, a .img disk image:
    cmd = str('fdisk -l ' + image)

    # read fdisk output- everything `cmd` would otherwise print to your console on stdout
    # is instead piped into `proc`.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)

    # the raw stuff from stdout is not parseable as is, so we read into a string:
    result = proc.stdout.read().__str__()

    # figure out what extension we should iterate with when looking through the
    # files / parts contained within image.  I have no idea if anything besides
    # .img will work- YMMV, but YOLO xD
    if '.iso' in result:
        iter = '.iso'
    elif '.qcow2' in result:
        iter = '.qcow2'
    else:
        iter = '.img'

    # chop up fdisk results by file / partition-
    # the resulting `parts` are equivalent to fdisk "rows" in the shell
    parts = re.findall(r'' + iter + r'\d', result)

    # dictionary `disk` contains each "row" from `parts`:
    disk = {}
    for p in parts:
        # sub dictionary 'part' contains the handy fdisk output values:
        part = {}
        # get just the number words with regex sauce:
        line = result.split(p)[1]
        words = re.split(r'\s+', line)
        # place each word into 'part':
        part['Start'] = words[1]
        part['End'] = words[2]
        part['Sectors'] = words[3]
        part['Size'] = words[4]
        part['Id'] = words[5]
        part['Format'] = words[6].split('\\n')[0]
        # stick this part into 'disk', move onto next disk part:
        disk[p] = part
    return disk
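
A quick usage sketch- the image filename here is just a placeholder:

```python
# assuming a raw Raspberry Pi OS image sitting in the working directory:
disk = fdisk('2021-01-11-raspios-buster-armhf.img')
for name, part in disk.items():
    print(name, part['Start'], part['Size'], part['Format'])
```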

Dover’s Enclosure

A stylish demo enclosure for the Xilinx / Digilent Genesys 2 with a display panel.

Check out what Dover Microsystems is up to here:
https://www.dovermicrosystems.com/

Prototyping & production @ the D&M Makerspace- see what else we're up to:
https://makerspace.plymouthcreate.net/

Electronics:

FPGA- Digilent Xilinx Genesys 2 FPGA Reference

The display and HDMI driver board are from pimoroni-
The sketch for panel dimensions is shared over here too

BOM for version 6:

You can find the V6 interactive Fusion 360 model over here

...additional V6 svg, stl layouts on tinkercad

Materials:

  • 12"x12" - 1/4" (6.35mm) clear acrylic sheet
  • 12"x12" - 3mm clear acrylic sheet
  • 12"x12" - 3mm colored acrylic sheet
  • ~45 grams printer plastic (filament or resin)

Hardware:

  • 3x m3x8
  • 3x m3x18
  • 1x m3x20
  • 7x m3 nut
  • 2x m2x14
  • 2x m2x10
  • 4x m2 nut

What is this thing?

"We use the FPGA to prototype / emulate a "Soft Core" CPU with and without Dover's IP (logic) called CoreGuard. An FPGA can simulate (sometimes called "emulate") logical circuits, and is reprogrammable. So you can design circuitry that eventually will be fabricated in silicon, but you can work out bugs and try different designs using the FPGA "fabric".

For demos, we synthesize to the Xilinx FPGA: a design for a RISC-V CPU, a simple UART (serial interface), an interface to the on-board DDR memory and flash memory, and a simple video output. We put some software in the on-board flash, then boot a working RISC-V system. We'll show how the software can be attacked, using I/O over the serial port to mimic what would typically take place over a network connection. Next, we show the same SoC (CPU + UART + memory) with CoreGuard logic added in. We run the same software and then show that the same attack is blocked by CoreGuard. We also use the FPGA to emulate the Arm CPU that we are interfacing with for our NXP customer."

Install Adobe Applications on AWS WorkSpaces

By default, the browser-based authentication used by Adobe’s Creative Cloud installers will fail on AWS WorkSpace instances. Neither the installer nor Windows provides much in the way of useful error messages- here is how to do it!

Open Server Manager. Under “Local Server”, open the “Internet Explorer Enhanced Security Configuration”- *(mercy!)* - and turn it off.

Good Lord

Tada! The sign-on handoff from installer → Browser → back to installer will now work fine. xD

Convert .heic -> .png

on github here, or just get this script:

wget https://raw.githubusercontent.com/Jesssullivan/misc/master/etc/heic_png.sh

Well, following the current course of Apple’s corporate brilliance, iOS now defaults to .heic compression for photos.

Hmmm.

Without further delay, let's convert these to png, here from the sanctuary of Bash in ♡Ubuntu Budgie♡.

Libheif is well documented here on Github BTW

#!/bin/bash
# recursively convert .heic to png
# by Jess Sullivan
#
# permiss:
# sudo chmod u+x heic_png.sh
#
# installs heif-convert via ppa:
# sudo ./heic_png.sh
#
# run as $USER:
# ./heic_png.sh

command -v heif-convert >/dev/null || {

  echo >&2 -e "heif-convert not installed! \nattempting to add ppa....";

  if [[ $EUID -ne 0 ]]; then
     echo "sudo is required to install, aborting."
     exit 1
  fi

  add-apt-repository ppa:strukturag/libheif
  apt-get update -y
  apt-get install libheif-examples -y

  exit 0

  }

# default behavior:

for fi in *.heic; do

  echo "converting file: $fi"

  heif-convert "$fi" "$fi.png"

 # FWIW, convert to .jpg is faster if png is not required
 # heif-convert "$fi" "$fi.jpg"

  done

ppe & whatnot

Yep, we too are busy cooking up protective medical devices.......

¯\_(ツ)_/¯

& whatnot:

Prototyping bits & bobs for an ADA motorsports startup-

ADA auto prototyping

Fast Pi camera stand sketch:

Quick pass at a low friction filament spool holder for some very fragile materials:

Some GDAL shell macros from R instead of rgdal

also here on github

it's not R sacrilege if nobody knows

Even the little stuff benefits from some organizational scripting, even if it’s just to catalog one’s actions. Here are some examples for common tasks.

Get all the source data into an R-friendly format like csv. ogr2ogr has a nifty option -lco GEOMETRY=AS_WKT (Well-Known Text) to keep track of spatial data throughout abstractions- we can add the WKT as a cell until it is time to write the data out again.

# define a shapefile conversion to csv from system's shell:
sys_SHP2CSV <- function(shp) {
  csvfile <- paste0(shp, '.csv')
  shpfile <-paste0(shp, '.shp')
  if (!file.exists(csvfile)) {
    # use -lco GEOMETRY to maintain location
    # for reference, shp --> geojson would look like:
    # system('ogr2ogr -f geojson output.geojson input.shp')
    # keeps geometry as WKT:
    cmd <- paste('ogr2ogr -f CSV', csvfile, shpfile, '-lco GEOMETRY=AS_WKT')
    system(cmd)  # executes command
  } else {
    print(paste('output file already exists, please delete', csvfile, 'before converting again'))
  }
  return(csvfile)
}

Read the new csv into R:

# for file 'foo.shp':
foo_raw <- read.csv(sys_SHP2CSV(shp='foo'), sep = ',')

One might do any number of things now; here, let's snag some columns and rename them:

# rename the subset of data "foo" we want in a data.frame:
foo <- data.frame(foo_raw[1:5])
colnames(foo) <- c('bar', 'eggs', 'ham', 'hello', 'world')

We could do some more careful parsing too; here, a semicolon in cell strings can be converted to a comma:

# replace ` ; ` to ` , ` in col "bar":
foo$bar <- gsub(pattern=";", replacement=",", foo$bar)

Do whatever you do for an output directory:

# make a output file directory if you're into that
# my preference is to only keep one set of output files per run
# here, we'd reset the directory before adding any new output files
redir <- function(outdir) {
  if (dir.exists(outdir)) {
    system(paste('rm -rf', outdir))
  }
  dir.create(outdir)
}

Of course, once your data is in R there are countless "R things" one could do...

# iterate to fill empty cells with preceding values
for (i in 1:length(foo[,1])) {
  if (nchar(foo$bar[i]) < 1) {
    foo$bar[i] <- foo$bar[i-1]
  }
  # fill incomplete rows with NA values:
  if (nchar(foo$bar[i]) < 1) {
    foo[i,] <- NA  
  }
}

# remove NA rows if there is nothing better to do:
newfoo <- na.omit(foo)

Even though this is totally adding a level of complexity to what could be a single ogr2ogr command, I've decided it is still worth it- I'd definitely rather keep track of everything I do than forget what I did.... xD

# make some methods to write out various kinds of files via gdal:
to_GEO <- function(target) {
  print(paste('converting', target, 'to geojson .... '))
  system(paste('ogr2ogr -f', " geojson ",  paste0(target, '.geojson'), paste0(target, '.csv')))
}

to_SHP <- function(target) {
  print(paste('converting ', target, ' to ESRI Shapefile .... '))
  system(paste('ogr2ogr -f', " 'ESRI Shapefile' ",  paste0(target, '.shp'), paste0(target, '.csv')))
}

# name files:
foo_name <- 'output_foo'

# for table data 'foo', first:
write.csv(foo, paste0(foo_name, '.csv'))

# convert with the above csv:
to_SHP(foo_name)

Cheers!
-Jess

Mac OSX: Fixing GPT and PMBR Tables

My computer recently crashed very, very hard while I was removing a small, empty alternative-OS partition I no longer needed.  This is a fairly mundane operation that I do now and again, and is an ongoing fight to keep at least a few gigs of space free for actual work on a precious 250gb Mac SSD.

The crash results?  Toasted GPT tables all around.   My 2015 computer’s next move was to reboot- only to find essentially no partitions of memory… at all.  What it did show was (wait for it) Clover bootloader of all things, with a single windows boot camp icon (nothing in there either).  That is so wrong…. On all levels!

I brought the machine to the local university repair.  They declared this machine bricked and offered to wipe it.  Back to me it came…

I scheduled an Apple support session with a phone rep, which after around 45 minutes of actually productive troubleshooting ideas (none helping though) was forwarded to a senior supervisor.  She was interested in this problem, and we scheduled a larger block of time. But, in the meantime, I still wanted to try again….

How to recover a garbled GPT table for Mac OSX:

Start with clean SMC and PRAM / NVRAM.

Clearing these actually made accessing internet recovery (how we get to a stand-in OS with a terminal) dozens of times faster.  2.5 hours to 7 minutes. I actually waited 2.5 hours twice on separate attempts before I cleared these.

Follow these Apple links to perform these operations:

https://support.apple.com/en-us/HT204063

https://support.apple.com/en-us/HT201295

Get the computer with a text editor open.

Restart the computer into internet recovery.  Command + R or Command + Shift + R.

Wait.

Open a Terminal.  The graphical disk utility is useless because the disk / partition we want is unreachable (so it will say everything is great).

Run:

diskutil list

For me, I see disk0s2 is 180.6 gb.  That’s my stuff!

I also found /dev/disk2 → /dev/disk14 to be tiny partitions- don’t worry about those.

The syntax you are looking for is:

Name: “untitled” Identifier: disk#

(NOT disk#s#)

Write down ALL of the above information for the disk you are after.  That is probably disk0.

Then:

gpt -r show disk0

Copy the following readout in your terminal for all entries bigger than “32”.  The critical fields here are Start, Size, Index, and Contents. Each field is supremely important.

Here is mine (formatted for web):

# Disk0, with contents > "32":

# First table:
Start: 40        Size: 409600      Index: 1   Contents: C12A7328-F81F-11D2-BA4B-00A0C93EC93B

# Second table, the one with my data:
Start: 409640    Size: 352637568   Index: 2   Contents: FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF

Note, this is the initial Contents.  I rewrote this once with the correct Apple Index 2 data but did not create a new table (leaving the rest of the broken bits broken).  We are replacing / destroying a table here, but not the data.     

Actions:

# unmount the disk.  From here we are doing tables, not disks / data.

diskutil unmountDisk disk0

# Get rid of the GPT on the disk we are recovering.  We are not touching the data.

gpt destroy disk0

# Make a new one to start with some fresh values.

gpt create -f disk0

# perform magic trick

# USE THE DATA YOU WROTE DOWN FROM “gpt -r show disk0”.  THIS IS IMPORTANT.

# we must add that first small partition at index 1.  Verbatim.

gpt add -i 1 -b 40 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B disk0

# index two (for me) is my data.  We are going to use the default OSX / Mac HD partition values.

# the Length of “372637568” is not as sure fire as the GPT Contents.  

# YMMV, but YOLO.

gpt add -i 2 -b 409640 -s 372637568 -t 7C3457EF-0000-11AA-AA11-00306543ECAC disk0

Again, that Contents value is 7C3457EF-0000-11AA-AA11-00306543ECAC.

- Jess

written in the recovered computer xD

Evaluating Ubuntu Pop OS: Dual Boot Setup

Dual OS on a 2015 MacBook pro

As the costs of Apple computers continue to skyrocket and the price of useable amounts of storage zooms past a neighboring galaxy (for a college student, at least), I am always on the hunt for cost-effective solutions to house and process big projects and large data.

Pop OS (a neatly wrapped Ubuntu) is the in-house OS from System76.  After looking through their catalog of incredible computers and servers, I thought it would be a good time to see how far I can go with an Ubuntu daily driver.  Of course, there are many major, do-not-pass-go downsides- see the list below:

  • Logic Pro X → There is no replacement 🙁   A killer DAW with fantastic AU libraries. I am versed with Reaper and Bitwig, but neither is as complete as Logic Pro.  I will be evaluating Pop with an installation of Reaper, but with so few plugins (I own very few third-party sets) this is not a fair replacement.
  • Adobe PS and LR:  I do not like Adobe, but these programs are... ...kind of crucial for most projects of mine that involve 2d, raster graphics.  I continue to use Inkscape for many tasks, but it is irrelevant when it comes to pixel-based work and photo management / bulk operations.
  • AutoCAD / Fusion 360 / Sketchup:  I like FreeCAD a lot, but it is not at all like the other programs.  Not worse or better, but these are all very different animals for different uses.
  • Apple Notes and other apple-y things:  OSX is extremely refined. Inter-device solutions are superb.  I have gotten myself used to Google Keep, but it is not quite at the in-house Apple level.
  • XCode and iOS Simulator environments:  I do use Expo, but frankly, to make products for Apple you need a Mac.

Dual Boot (OSX and Pop Ubuntu) Installation on a 2015 MBP:

This process is quite simple, and only calls for a small handful of post-installation tweaks.  My intent is to create a small sandbox with minimal use of "extras" (no extra boot managers or anything like that).

Steps:

Partition separate “boot”, “home”, and other drives

  • I am using a 256gb micro sd partitioned in half for OSX and Pop_OS (Sandisk extreme, “v3” speed rating version card via a BaseQi slot adapter)

Use the partition tool in Mac disk utility.  Be sure to set these new partitions as FAT 32- we will be using ext4 and other more linux-y filesystems upon installation, so these need to be as generic as possible.

Get a copy of Pop_OS from System76.

Use Etcher (recommended) or any other image burning tool to create a boot key for Pop.  

The USB key only has one small job: Pop_OS gets installed from it into a better location, the boot partition made in the previous step.  If you are coming from a hackintosh experience, fear not: everything will stay in the Macbook Pro- no extra USB safety dongles, Kexts, or Plist mods…!

BOOT INTO POP_OS:

Restart your computer and hold down the alt/option key.  THIS IS HOW TO SWITCH between Pop_OS, OSX, Bootcamp, and anything else you have in there.  You should see an "efi" option next to the default OSX. (Note- at least in my case, the built-in bootloader defaults to the last used OS at each restart.)

Once you are in the Pop_OS installer, click through and select the appropriate partitions when prompted.  After this installation, you may remove the USB key and continue to select "efi" in the bootloader.


ASSUMING ALL GOES WELL:

You are now in Pop_OS!  Using the alt/option key will become second nature… but some Pop key mappings may not.  Continue for a list of Macbook Pro - specific tweaks and notes.

First moves:

Go to the Pop Shop and get the “Tweaks” tool.  I made one or two small keymap changes, but this is likely personal preference.  

Default, important Key Mappings:

Command will act as a “control center-ish” thing.  It will not copy or paste anything for you.

Control does what Command did on OSX.  

Terminal uses Control+Shift for copy and paste, but only in Terminal:  if you pull a Control+Shift+C in Chrome, you will get the Dev tool GUI...  The Shift key thing is needed unless you are inclined to root around and change it.

Custom Boot Scripts and Services:

In an effort to make things simple, I made a shell script to house the processes I want running when I turn on the computer- this is to streamline the “.service” making process.  While it may only take marginally more time to make a new service, this way I can keep track of what is doing what from a file in my documents folder.

In terminal, go to where your services live if you want to look:

cd /etc/systemd/system

Or, cut to the chase:

sudo nano /etc/systemd/system/startsh.sh.service

Paste the following into this new file:

```
[Unit]
Description=Start at Open plz

[Service]
ExecStart=/Documents/startsh.sh

[Install]
WantedBy=multi-user.target
```

Exit nano (saving as you go) and cd back to “/”.

cd /

sudo nano /Documents/startsh.sh

Paste the following (and any scripts you may want, see the one I have commented out for odrive CLI) into this new file:

```
#!/bin/bash

# Uncomment the following if you want 24/7 odrive in your system
# otherwise do whatever you want

#nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &

# end
```

After exiting the shell script, start it all up with the following:

sudo systemctl start startsh.sh

sudo systemctl enable startsh.sh

Cloud file management with Odrive CLI and Odrive Utilities:

Visit one of the two Odrive CLI pages- this one has linux in it:

https://forum.odrive.com/t/odrive-sync-agent-a-cli-scriptable-interface-for-odrives-progressive-sync-engine-for-linux-os-x-and-windows/499#linuxinst

Please visit this repo to get going with --recursive and other odrive utilities

https://github.com/amagliul/odrive-utilities


These are the two commands I ended up putting in a markdown file on my desktop for easy access.  Nope, not nearly as cool as it is on OSX. But it works…

Odrive sync: [-h] for help

```
python "$HOME/.odrive-agent/bin/odrive.py" sync
```

Odrive utilities:

```
python "$HOME/odrive-utilities/odrivecli.py" sync --recursive
```

Next, Get Some Apps:

Download Chrome.  Sign into Chrome to get your chrome OS apps loaded into the launcher- in my case, I needed Chrome remote desktop.  DO NOT DOWNLOAD ADDITIONAL PACKAGES for Chrome Remote Desktop, if that is your thing. They will halt all system tools (disk utils, Gnome terminal, graphical file viewer…   !!See this thread, it happened to me!! )

Stock up!  

Get Atom editor:  https://atom.io/

...Or my favorites: https://www.jetbrains.com/toolbox/app/

Rstudio:  https://www.rstudio.com/products/rstudio/download/#download

Mysql:  https://dev.mysql.com/downloads/mysql/

MySQL Workbench:  https://dev.mysql.com/downloads/workbench/

If you get stuck:  make sure you have tried installing as root ($ sudo su -) and verified passwords with ($ sudo mysql_secure_installation)  

See here to start “rooting around” MySQL issues:  https://stackoverflow.com/questions/50132282/problems-installing-mysql-in-ubuntu-18-04/50746032#50746032

Get some GIS tools:

QGIS!

sudo apt-get install qgis python-qgis qgis-plugin-grass

uGet for bulk USGS data download!

sudo add-apt-repository ppa:plushuang-tw/uget-stable

sudo apt install uget

That's all for now- Cheers!

-Jess

INFO: Deploy a Shiny web app in R using AWS (EC2 Red Hat)

Info on deploying a Shiny web app in R using AWS (EC2 Redhat)

As a follow-up to my post on how to create an AWS RStudio server, the next logical step is to host some useful apps you created in R for people to use.  A common way to do this is the R-specific tool Shiny, which is built in to RStudio.  Learning the syntax to convert R code into a Shiny app is rather subtle, and can be hard.  I plan to do a more thorough demo on this- particularly the use of the $ symbol, as in “input$output”- later. 🙂

 

It turns out hosting a Shiny Web app provides a large number of opportunities for things to go wrong….  I will share what worked for me.  All of this info is accessed via SSH, to the server running Shiny and RStudio.

 

I am using the AWS “Linux 2” AMI, which is based on the Red Hat OS.  For reference, here is some extremely important Red Hat CLI language worth being familiar with and debugging:

 

"sudo yum install" and "wget" are for fetching and installing things like shiny.  Don't bother with instructions that include "apt-get install", as they are for a different Linux OS!

 

"sudo chmod -R 777" is how you change your directory permissions for read, write, and execute (all of those enabled).  This is handy if your server disconnects when the app tries to run something- it is a simple fix to a problem not always evident in the logs.  The default root folder from which shiny apps are hosted and run is "/srv/shiny-server" (or just "/srv" to be safe).

 

"nano /var/log/shiny-server.log" opens the current shiny logs.

 

"sudo stop shiny-server" followed by "sudo start shiny-server" is the best way to restart the server- "sudo restart shiny-server" is not a sure bet the way it is with some other processes.  It is true that other tools like a node.js server or nginx could impact the success of Shiny- if you think nginx is a problem, "cd /etc/nginx" followed by "ls" will get you in the right direction.  Others have cited problems with Red Hat not including the directories and files at "/etc/nginx/sites-available".  You do not need these directories (though they are probably important for other things).

 

"sudo rm -r" is a good way to destroy things, like a mangled R studio installation.  Remember, it is easy enough to start again fresh!  🙂

 

"sudo nano /etc/shiny-server/shiny-server.conf" is how to access the config file for Shiny.  The fresh install version I used did not work!  There will be lots of excess in that file, much of which can cause issues in a bare-bones setup like mine.  One important key is to ensure Shiny is using a root user- see my example file below.  I am the root user here (jess)- change that to mirror, at least for the beginning, the user defined as root in your AWS installation.  See my notes HERE on that- it is defined in the advanced settings of the EC2 instance.

 

BEGIN CONFIG FILE:   (or click to download) *Download is properly indented


# Define user: this should be the same user as the AWS root user!
#
run_as jess;
#
# Define port and where the home (/) directory is
# Define site_dir/log_dir - these are the defaults
#
server {
  listen 3838;
  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}

END CONFIG FILE

Well, the proof is in the pudding.   At least for now, you can access a basic app I made that cleans csv field data files that were entered into Excel by hand.  They start full of missing fields and have a weird two-column setup for distance- the app cleans all these issues and returns a 4-column (from 5-column) csv.

Download the test file here:   2012_dirt_PCD-git

And access the app here:  Basic Shiny app on AWS!

Below is an iFrame into the app, just to show how very basic it is.  Give it a go!

-Jess

How to make a AWS R server

When you need an R server and have lots of data to process, AWS is a great way to go.   Sign up for the free tier and poke around!

Creating an AWS Rstudio server:

https://aws.amazon.com/blogs/big-data/running-r-on-aws/ - using both the R snippet (works but the R core bits are NOT present and it will not work yet) and the JSON snippet provided  

https://www.rstudio.com/products/rstudio/download-server/ - the suite being installed

Follow most of the AWS blog AMI info, with the following items:

AMI:  Amazon Linux 2 (more packages and extras v. standard)  

  • t2.micro (free tier)
  • IAM policy follows AWS blog JSON snippet
  • Security Policy contains open inbound ports 22, 8787, 3838 (the latter two for R server specific communication)
  • Append user, username:password in the blog post's initial R studio install text (pasted into the "advanced" text box when completing the AMI setup)

 

SSH into the EC2 instance

sudo yum install –y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

sudo yum-config-manager --enable epel

sudo yum repolist

wget https://download2.rstudio.org/rstudio-server-rhel-1.1.423-x86_64.rpm

sudo yum update -y

sudo yum install -y R

sudo rstudio-server verify-installation

 

Access the graphical R server:

In a web browser, tack ":8787" onto the end of the Instance's public "connect" link.  If it doesn't load a login window (but seems to be trying to connect to something), the security policy is probably being overzealous.

 

Notes on S3-hosted data:

  • S3 data is easiest to use if it is set to be public.
  • There are s3-specific tools for R, accessible as packages from CRAN directly from the R interface
  • Note data (delimited text at least) hosted in S3 will behave differently than it does locally, e.g. spaces, “na”, “null” need to be “cleaned” in R before use.  

 

There we have it!

 

-Jess

DIY CMoy headphone Amp, point to point: worth it?

This is an extraordinarily simple headphone amp, and has essentially reached legendary status at this point.  I decided to build the original design (there are countless mods and totally different amps that hark back to this one), though opting for a wall-wart desk form factor instead of the original 9v battery tin.

 

I am building from the 2008 "williamneo" blog post, as I like that point-to-point layout of his.  (I also could not easily procure the now-ancient RadioShack proto board- yes, the one shaped like a crab)  🙂

References:

http://williamneo.blogspot.com/2008/01/diy-cmoy-headphone-amplifier-for.html

https://tangentsoft.net/audio/cmoy/

 

Opting for a desktop form factor and 12 volts, my little amp works fine for easy-to-drive headphones.... But does it actually sound better than, say, an iPhone?

 

The truth is, while it gets much louder than a phone, the OPA JFET op amp powering the whole thing is simply NOT high end.  It will distort with too much input (a few clicks on a phone before the phone is maxed out, for reference- well below standard line/"dac" level in both home and pro audio, voltage-wise) and will incrementally break up with hard/loud songs on bigger headphones as the volume and rocking out go up- in my case, the current Fostex RP evolution I have been mangling is the big cheese candidate.  The planar drivers in the RP series are definitely "very hard" to drive in the scheme of things, but anything "heavy weight" will simply not do when pushed to an accessible limit.  I CAN use smaller dynamic/efficient headphones with ease, such as my ported and open HiFiMan Edition S, without experiencing distortion.  Still, it is clear this amp is "diy/cheap access to power"- slated against the Fiio E12 for example, the CMoy seems a bit.... loose?  Not any better, that is for sure!  (granted, the E12 is a fantastic budget amp)

 

Note on op amps:   the NTE and other non-OPA brand-name lookalikes and electrical analogues sound terrible.  I was fussing around with the NTE variants after needing to short out the "R5" resistors (very important when building:  do not use R5) when they actually blew out from my using an inversely-poled power supply by accident, and was getting frustrated as the amp was working but sounded like it was making TIDAL streaming gurgle out of a cheese grater.  Eventually, after an ebay shipment of brand-name OPAs came, I popped one in cautiously and it turned out the op amp was the source of the.... cheese grating.

In all, a good project for skill building but falls short of anything "hifi".

-Jess

Wolf Pine @ Fox Park #1 +Bonus Winter Birds

Today and yesterday, 2.19.17 - 2.20.17, have officially kicked off my first real visits to my "sit spot" (required for all adventure ed students at PSU) and commutes around campus armed with my bird rig, ready for the warmer-weather inclined birds.

Observations from the Wolf Pine @ Fox Park:

Snowshoed into Fox Park around 2:15 on Sunday, 2.19.17.

Weather:  After repeated heavy snow falls, Sunday was the first day solidly above freezing- thus a large amount of dripping and snow-condensing was happening.  My wolf pine was in a bit of a freezing puddle, with ~2 feet of snow accumulation surrounding its base.   High pressure day, bluish-grey skies and scattered wispy clouds.  Light breeze, and fairly quiet.

Upon quieting myself and my raucous snow-hoverboards, it became apparent how few birds and squirrels were about.    I could hear "whispers" and chips from passerines, but they sounded far away, likely lower on the hill, near the squishy earth and faux-pond.  Squirrels maybe rustled a branch or two during my sit- note the trees were about half evergreen and probably not a food source for these little mammals.  These trees would, however, provide good coverage from avian predators...  I wonder if the squirrels have thought of that.

Perhaps the surrounding homes and intermittent (not on Sunday) construction sounds provided a safer space park-wide.  Owls, and to a lesser extent hawks, are irked to no end by these sounds and by regular-but-unpredictable human activity.  I have observed elsewhere, in MA, that owls are not put off by circadian dog walkers at all;  in fact, I would glean most of my "big bird" info from the unperturbed 2 - 3 times a day dog walkers of my neighborhood.   Great horned families, bald eagles, and belted kingfisher pairs couldn't care less about the 2 dozen or more dogs that pass under their homes each day, but the moment a motor boat, police car, or loud party occurred, these unbelievable species would vanish.   I make this digression because this is a college town, and the park is surrounded by active dwellings of different sorts, including development sites.  THUS:  there were essentially no rodents/lagomorphs/etc.  (easily findable ones, that is)

Speaking of which, the tracks were tough to figure out.  Heavy dogs?  Yes.  Beyond that, the melting snow and dripping were creating a fairly nondescript blanket over any crazy prints.

I noticed remarkable BIG woodpecker activity, i.e. Pileated and Red Bellied/flicker- especially on my way out of the park.  Holy smokes are the pileated OCD around here!

Also note the interesting spiraling growth pattern on this Wolf Pine limb.  It is long dead, but appeared and felt denser than "ye average" pine tree.  ??

I plan to get back to my spot ASAP for more warm weather observations.  I believe this is the forecast all week!

BONUS WINTER BIRDS FROM MY COMMUTE THIS MORNING:

A loud house finch and a lovely Bohemian waxwing.