Trans: Latin prefix implying "across" or "beyond", often used in gender nonconforming situations – Scend: Archaic word describing a strong "surge" or "wave", originating with 15th century English sailors – Survival: 15th century English compound word describing an existence only worth transcending.

Category: Lifestyle/How-To

Annotators, interpreters & audio demo stuff

...Notes, Repo

...Even more demos @ ai.columbari.us

Miscellaneous dregs, bits, bobs, and demos in this playlist on YouTube


Client-side, asynchronous HTTP methods- TypeScript

...Despite the ubiquity of needing to make a POST request from a browser (or, perhaps, for this very reason), there seem to be as many ways, methods, libraries, and standards for implementing HTTP functions in JavaScript as there are people doing said implementing. Between the adoption of the Fetch API in browsers and the prevalence and power of Promises in JS, asynchronous HTTP needn't be a hassle!

/*
...happily processing some data in a browser, when suddenly...
....panik!
you need to complete a portion of this processing elsewhere on some server...:
*/


Obliterate non-removable MDM profiles enforced by Apple’s Device Enrollment Program

Or, when life gives you apples, use Linux

Seemingly harder to remove with every eye-glazing gist and thread... A Mac plagued with an is_mdm_removable=false Mobile Device Management profile: the worst! 🙂

First, boot into recovery mode by rebooting while holding down the Command & R keys.

At this stage, you'll need to connect to the internet briefly to download the recovery OS. This provides a few tools, including Disk Utility, support, and an OSX reinstaller- at the top menu, you'll find an option to access a terminal.

Once in there, you'll want to:

Disable SIP:

csrutil disable

Then reboot:

reboot

While holding down Command + Option + P + R to start afresh with cleared NVRAM.

Reboot once again while holding down the Command & R keys to return to the recovery OS. Reinstall whatever version of OSX it offers- instead of trying to deal directly with the slippery, network-connected DEP plists & binaries contained within the various LaunchAgents and LaunchDaemons found in the /System/Library directories, we'll let Apple finish with the ConfigurationProfiles first, then sneak in and remove them.

While this stuff is cooking, get yourself a USB stick and a penguin, such as Budgie:

wget -nd http://cdimage.ubuntu.com/ubuntu-budgie/releases/20.04.1/release/ubuntu-budgie-20.04.1-desktop-amd64.iso
umount /dev/sdc 2>/dev/null || true
sudo dd if=ubuntu-budgie-20.04.1-desktop-amd64.iso of=/dev/sdc bs=1048576 && sync

Boot up again, this time holding the Option key for the bootloader menu. Once in the live USB system, make sure you can read Apple's HFS filesystem:

sudo apt-get install hfsprogs

For me at least, I needed to run a quick fsck to fix up the headers before I could mount the OSX filesystem living at /dev/sda2 (sda1 is the EFI partition):

sudo fsck.hfsplus /dev/sda2

Now, let's go in there and remove those ConfigurationProfiles:

mkdir badapple
sudo mount -o force /dev/sda2 badapple
cd badapple
sudo rm -rf private/var/db/ConfigurationProfiles/*

🙂

clipi CLI!

Find this project on my GitHub here!

...post updated 07/19/2020

An efficient toolset for Pi devices

Emulate, organize, burn, manage a variety of distributions for Raspberry Pi.


Choose your own adventure....

Emulate:
clipi virtualizes many common SBC operating systems with QEMU, and can play with both 32-bit and 64-bit operating systems.

  • Select from any of the included distributions (or add your own to /sources.toml!) and clipi will handle the rest.

Organize:
clipi builds and maintains organized directories for each OS as well as a persistent & convenient .qcow2 QEMU disk image.

  • Too many huge source .img files and archives? clipi cleans up after itself under the Utilities... menu.
  • additional organizational & gcc compilation methods are available in /kernel.py

Write:
clipi burns emulations to external disks! Just insert an SD card or disk and follow the friendly prompts. All files, /home, guest directories are written out.

  • Need to pre-configure (or double check) wifi? Add your ssid and password to /wpa_supplicant.conf and copy the file to /boot in the freshly burned disk.
  • Need pre-enabled ssh? copy /ssh to /boot too.
  • clipi provides options for writing from an emulation's .qcow2 file via qemu...
  • ...as well as from the source's raw image file with the verbatim argument

Manage:
clipi can find the addresses of all the Raspberry Pi devices on your local network (a rough sketch of one way to do this follows below).

  • Need to do this a lot? clipi can install itself as a Bash alias (option under the Utilities... menu); fire it up whenever you want.
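
Not clipi's actual implementation, but a minimal sketch of one way to do this- scan the local ARP table for Raspberry Pi MAC prefixes (the parsing and prefixes here are my own assumptions):

# sketch: list likely Raspberry Pi addresses from the local ARP table
# (b8:27:eb & dc:a6:32 are Raspberry Pi MAC prefixes; ping-sweep the subnet first if the table is empty)
import re
import subprocess

PI_PREFIXES = ('b8:27:eb', 'dc:a6:32')

def find_pis():
    arp = subprocess.run(['arp', '-a'], capture_output=True, text=True).stdout
    pis = []
    for line in arp.splitlines():
        match = re.search(r'\(([\d.]+)\) at ([0-9a-f:]{17})', line)
        if match and match.group(2).lower().startswith(PI_PREFIXES):
            pis.append(match.group(1))
    return pis

print(find_pis())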

Shortcuts:

Shortcuts & configuration arguments can be passed to clipi as a .toml (or yaml) file.

  • Shortcut files access clipi's tools in a similar fashion to the interactive menu:
# <shortcut>.toml
# you can access the same tools and functions visible in the interactive menu like so:
'Burn a bootable disk image' = true  
# same as selecting in the interactive cli
'image' = 'octoprint'
'target_disk' = 'sdc'  
  • clipi exposes many features only accessible via configuration file arguments, such as distribution options and emulation settings.
# <shortcut>.toml

# important qemu arguments can be provided via a shortcut file like so:
'kernel' = "bin/ddebian/vmlinuz-4.19.0-9-arm64"
'initrd' = "bin/ddebian/initrd.img-4.19.0-9-arm64"

# qemu arguments like these use familiar qemu lexicon:
'M' = "virt" 
'm' = "2048"

# default values can be edited the same way:
'cpu' = "cortex-a53"
'qcow_size' = "+8G"
'append' = '"rw root=/dev/vda2 console=ttyAMA0 rootwait fsck.repair=yes memtest=1"'

# extra arguments can be passed too:
'**args' = """
-device virtio-blk-device,drive=hd-root \
-no-reboot -monitor stdio
"""

# additional network arguments can be passed like so:
# (clipi may automatically modify network arguments depending on bridge / SLiRP settings)
'network' = """
-netdev bridge,br=br0,id=net0 \
-device virtio-net-pci,netdev=net0
"""
  • Supply a shortcut file like so:
    python3 clipi.py etc/find_pi.toml

  • take a look in /etc for some shortcut examples and default values

TODOs & WIPs:

bridge networking things:

  • working on guest --> guest, bridge --> host, host only mode networking options.
    as of 7/17/20 only SLiRP user mode networking works,
    see branch broken_bridge-networking
    to see what is currently cooking here

kernel stuff:

  • automate ramdisk & kernel extraction-
    most functions to do so are all ready to go in /kernel.py

  • other random kernel todos-

    • working on better options for building via qemu-debootstrap from chroot instead of debian netboot or native gcc
    • add git specific methods to sources.py for mainline Pi linux kernel
      • verify absolute binutils version
      • need to get cracking on documentation for all this stuff

gcp-io stuff:

  • formalize ddns.py & dockerfile

  • make sure all ports (22, 80, 8765, etc) can up/down as reverse proxy

# clone:
git clone https://github.com/Jesssullivan/clipi
cd clipi

# preheat:
pip3 install -r requirements.txt
# (or pip install -r requirements.txt)

# begin cooking some Pi:
python3 clipi.py

Parse fdisk -l in Python

fdisk -l has got to be one of the more common disk-related commands one might use while fussing about with raw disk images. The fdisk utility is ubiquitous across Linux distributions (also brew install gptfdisk and brew cask install gdisk, supposedly). The -l argument provides a quick look at raw sector & file system info. Figuring out the Start, End, Sectors, Size, Id, and Format of a disk image's contents without needing to mount it and start lurking around is handy- just the sort of thing one might want to do with Python. Let's write a function to get these attributes into a dictionary- here's mine:

import subprocess
import re

def fdisk(image):

    #  `image`, a .img disk image:
    cmd = str('fdisk -l ' + image)

    # read fdisk output- everything `cmd` would otherwise print to your console on stdout
    # is instead piped into `proc`.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)

    # the raw stuff from stdout is not parseable as is, so we read into a string:
    result = proc.stdout.read().__str__()

    # figure out which extension to key on when looking for the file / partitions
    # contained within the image.  I have no idea if anything besides .img will work- YMMV, but YOLO xD
    if '.iso' in result:
        iter = '.iso'
    elif '.qcow2' in result:
        iter = '.qcow2'
    else:
        iter = '.img'

    # chop up fdisk results by file / partition-
    # the resulting `parts` are equivalent to fdisk "rows" in the shell
    parts = re.findall(r'' + iter + r'\d', result)

    # dictionary `disk` contains each "row" from `parts`:
    disk = {}
    for p in parts:
        # sub dictionary 'part' contains the handy fdisk output values:
        part = {}
        # get just the number words with regex sauce:
        line = result.split(p)[1]
        words = re.split(r'\s+', line)
        # place each word into 'part':
        part['Start'] = words[1]
        part['End'] = words[2]
        part['Sectors'] = words[3]
        part['Size'] = words[4]
        part['Id'] = words[5]
        part['Format'] = words[6].split('\\n')[0]
        # stick this part into 'disk', move onto next disk part:
        disk[p] = part
    return disk
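
For example (the image filename here is hypothetical), calling the function and printing a few fields per partition looks like:

# hypothetical usage- any raw .img with a partition table should do:
disk = fdisk('some-raspbian-image.img')
for name, part in disk.items():
    print(name, part['Start'], part['Size'], part['Format'])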

The eBird API & regionCode

Get this script and other GIS bits here on GitHub

The eBird dataset is awesome. While directly handling the data as a massive delimited file- as distributed by the eBird people- is cumbersome at best, the eBird API offers a fairly straightforward and efficient alternative for a few choice bits and batches of data.

  • The eBird AWK tool for filtering the actual delimited data can be found over here:

    install.packages("auk")

It is worth noting that R + auk (or frankly any R-centered filtering method) will quickly become limited by the single-threaded approach of R and by how you're managing memory as you iterate. Working with and querying the data from a proper database quickly becomes necessary.
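
As a rough sketch of that idea (the filename is hypothetical, and this is not an official eBird tool), the tab-delimited eBird Basic Dataset can be chunked into SQLite with pandas and queried with SQL from there:

# sketch: stream the (huge, tab-delimited) eBird Basic Dataset into SQLite in chunks,
# so it never has to fit in memory; the .txt filename is hypothetical
import sqlite3
import pandas as pd

con = sqlite3.connect('ebird.db')
for chunk in pd.read_csv('ebd_sample.txt', sep='\t', chunksize=100000):
    chunk.to_sql('ebd', con, if_exists='append', index=False)
con.close()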

Most conveniently, the eBird API already exists- snag a key over here.

...The API package for R is over here:
install.packages("rebird")

...There is also a neat Python wrapper over here:
pip3 install ebird-api

Region Codes:

I'm not sure why, but some methods use normal latitude / longitude in decimal degrees while some others use "regionCode", which seems to be some kind of eBird special. I've only ever seen this format in eBird data.

For example, recent observations uses regionCode:

# GET Recent observations in a region:
# https://api.ebird.org/v2/data/obs/{{regionCode}}/recent

...But nearby recent observations uses latitude / longitude:

# GET Recent nearby observations:
# https://api.ebird.org/v2/data/obs/geo/recent?lat={{lat}}&lng={{lng}}

Regardless, let's just write a function to convert decimal degrees to this regionCode thing. Here's mine:

#!/usr/bin/env python3
"""
# provide latitude & longitude, return eBird "regionCode"
Written by Jess Sullivan
@ https://www.transscendsurvival.org/
available at: 
https://raw.githubusercontent.com/Jesssullivan/GIS_Shortcuts/master/regioncodes.py
"""
import requests
import json

def get_regioncode(lat, lon):

    # the FCC census area API is publicly available- no keys needed, AFAICT:
    census_url = str('https://geo.fcc.gov/api/census/area?lat=' +
                     str(lat) +
                     '&lon=' +
                     str(lon) +
                     '&format=json')

    # send out a GET request:
    payload = {}
    get = requests.request("GET", census_url, data=payload)

    # parse the response, all api values are contained in list 'results':
    response = json.loads(get.content)['results'][0]

    # use the last three digits of the county FIPS code as the "subnational 2" identifier:
    fips = response['county_fips']

    # assemble and return the "subnational type 2" code:
    regioncode = 'US-' + response['state_code'] + '-' + fips[2] + fips[3] + fips[4]
    print('formed region code: ' + regioncode)
    return regioncode
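
A quick usage example (the coordinates are arbitrary):

# arbitrary example coordinates- prints whatever code the FCC API resolves them to:
if __name__ == '__main__':
    get_regioncode(44.26, -72.58)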

Prius, Printers

Add an EV-only mode button for a 2009 Prius-

Pinouts and wiring reference here:
http://www.calcars.org/prius-evbutton-install.pdf

Big shiny EV mode button in the Prius!

Fusion 360 files here: https://a360.co/2zOACJJ

Files uploaded to thingiverse here: https://www.thingiverse.com/thing:4422091

xposted to prius chat too:
https://priuschat.com/threads/3d-printed-ev-mode-button-xd.216774/

PLA & Carbon Polycarbonate button housings:


While we're at it....

See more notes on D&M 3d Printer stuff on github here:
https://github.com/Jesssullivan/Funmat-HT-Notes
...and here:
https://github.com/Jesssullivan/AeroTaz5_hotfix

Install Adobe Applications on AWS WorkSpaces

By default, the browser based authentication used by Adobe’s Creative Cloud installers will fail on AWS WorkSpace instances. Neither the installer nor Windows provide much in the way of useful error messages- here is how to do it!

Open Server Manager. Under “Local Server”, open the “Internet Explorer Enhanced Security Configuration” *(mercy!)* and turn it off.

Good Lord

Tada! The sign-on handoff from installer → browser → back to installer will now work fine. xD

Convert .heic -> .png

On GitHub here, or just get this script:

wget https://raw.githubusercontent.com/Jesssullivan/misc/master/etc/heic_png.sh

Well, following the current course of Apple’s corporate brilliance, iOS now defaults to .heic compression for photos.

Hmmm.

Without further delay, let's convert these to png, here from the sanctuary of Bash in ♡Ubuntu Budgie♡.

Libheif is well documented here on GitHub, BTW

#!/bin/bash
# recursively convert .heic to png
# by Jess Sullivan
#
# permiss:
# sudo chmod u+x heic_png.sh
#
# installs heif-convert via ppa:
# sudo ./heic_png.sh
#
# run as $USER:
# ./heic_png.sh

command -v heif-convert >/dev/null || {

  echo >&2 -e "heif-convert not installed! \nattempting to add ppa....";

  if [[ $EUID -ne 0 ]]; then
     echo "sudo is required to install, aborting."
     exit 1
  fi

  add-apt-repository ppa:strukturag/libheif
  apt-get update -y
  apt-get install libheif-examples -y

  exit 0

  }

# default behavior:

for fi in *.heic; do

  echo "converting file: $fi"

  heif-convert "$fi" "$fi.png"

 # FWIW, convert to .jpg is faster if png is not required
 # heif-convert "$fi" "$fi.jpg"

  done

D&M Shields – Fusion 360

As of 4/4/20, we are busy 3d printing our rigid shield design, efficiently hacked into its current form by Bret here at D&M. Click here to visit or download the Fusion files!

The flat, snap-fit nature of this design can easily be lasercut as well- the varied depths of the printed model are just an effort to minimize excess plastic and print time.

More to come on the laser side of things- in addition to the massive time savings (like <20 seconds vs. >3 hours per shield), we can use far cheaper and more varied materials with the addition of our sterilizable and durable UV resins and coatings. Similarly, lasercut stock + resin offers the possibility of quick adaptation and derivative design, such as [flexible](https://a360.co/2UFKRHM) UV-cured forms.

Ubuntu on Captive Portal WiFi

wget https://raw.githubusercontent.com/Jesssullivan/misc/master/sel/portal.py

Some distros (Ubuntu and its derivatives on my MacBook, for instance) struggle to find a route to the captive portal on public networks (read: Starbucks). Assuming you want to connect the way they intend (via the gateway through the captive portal) because you are a rule-abiding patron, all you need to do is visit the address of the gateway. There is no need to fiddle with your network drivers, disable SSL stuff, or engage in plebeian network tomfoolery.

"""
find and open wifi captive portal (such as Starbucks)
written by Jess Sullivan
"""
from netifaces import gateways
from sys import argv
from time import sleep
import subprocess

help_str = str("\n " +
               'usage: \n ' +
               ' -h : print this message again \n' +
               ' -i : `pip3 install netifaces`  \n' +
               'You may specify a browser argument to complete the portal, such as \n' +
               'google-chrome')

def argtype():
    try:
        if len(argv) > 1:
            use = True
        elif len(argv) == 1:
            use = False
            print(help_str)
        else:
            print('command takes 0 or 1 args, use -h for help')
            raise SystemExit
    except:
        print('arg error... \n command takes 0 or 1 args, use -h for help')
        raise SystemExit
    return use

def main():

    all_gates = gateways()
    target = all_gates['default'][2][0]

    if argtype():

        if argv[1] == '-h':
            print(help_str)
            quit()

        if argv[1] == '-i':
            try:
                subprocess.Popen('pip3 install netifaces',
                                 shell=True,
                                 executable='/bin/bash')
                sleep(1)
                quit()
            except:
                print('err running pip3 install netifaces')
                quit()

        else:

            print(str('opening portal in ' + argv[1]))
            subprocess.Popen(str(argv[1] + ' ' + str(target)),
                             shell=True,
                             executable='/bin/bash')
            sleep(1)
            quit()

    else:

        print(str('\n please visit address ' + target +
                  ' in a browser to complete portal setup \n'))
        quit()

main()

JDK Management in R

Quickly & forcefully manage extra JDKs in base R
Simplify rJava woes

# get this script:
wget https://raw.githubusercontent.com/Jesssullivan/rJDKmanager/master/JDKmanager.R

rJava is depended upon by lots of libraries- XLConnect, OpenStreetMap, many DB connectors- and is often needed while scripting with GDAL.

library(XLConnect)   # YMMV

Errors while importing a library that depends on a JDK are many, but can (usually) be resolved by reconfiguring the version listed somewhere in the error.

On Mac OSX (on Mojave at least), check what you have installed here- (as admin, this is a system path):

sudo ls /Library/Java/JavaVirtualMachines/

I seem to usually have at least half a dozen or more versions in there, between Oracle and openJDK. Being Java, these are basically sandboxed as JVMs and will not get in each other's way.

However...

Unlike JDK configuration for just about everything else, aliasing or exporting a specific release to $PATH will not cut it in R. The shell command to reconfigure for R-

sudo R CMD javareconf

...seems to always choose the wrong JDK. Renaming, hiding, or otherwise trying to explain to R which one I want (lib XLConnect currently wants none other than Oracle 11.0.1) is futile.
The end-all solution for me is usually to temporarily move other JDKs elsewhere.
This is not difficult to do now and again, but keeping a CLI totally in R for moving / replacing JDKs makes for organized scripting.

 JDKmanager help: 
 (args are not case sensitive) 
 (usage: `sudo rscript JDKmanager.R help`) 

 list    :: prints contents of default JDK path and removed JDK path 
 reset   :: move all JDKs in removed JDK path back to default JDK path 
 config ::  configure rJava.  equivalent to `R CMD javareconf` in shell 

 specific JDK, such as 11.0.1, 1.8,openjdk-12.0.2, etc: 
    searches through both default and removed paths for the specific JDK.  
    if found in the default path, any other JDKs will be moved to the `removed JDKs` directory. 
    the specified JDK will be configured for rJava.

Decentralized Pi Video Monitoring w/ motioneye & BATMAN

Visit me here on GitHub
Added parabolic musings 10/16/19, see below

...On using motioneye video clients on Pi Zeros & Raspbian over a BATMAN-adv Ad-Hoc network

link: motioneyeos
link: motioneye Daemon
link: Pi Zero W Tx/Rx data sheet:
link: BATMAN Open Mesh

This implementation of motioneye is running on Raspbian Buster (opposed to motioneyeos).

Calculating Mesh Effectiveness w/ Python:
Please take a look at dBmLoss.py- the idea here is one should be able to estimate the maximum plausible distance between mesh nodes before setting anything up. It can be run with no arguments-

python3 dBmLoss.py

...with no arguments, it should use default values (Tx = 20 dBm, Rx = |-40| dBm) to print this:

you can add (default) Rx Tx arguments using the following syntax:
                 python3 dBmLoss.py 20 40
                 python3 dBmLoss.py <Rx> <Tx>                 

 57.74559999999994 ft = max. mesh node spacing, @
 Rx = 40
 Tx = 20

Regarding the Pi:
The Pi Zero uses an onboard BCM43143 wifi module. See above for the data sheet. We can expect around a ~19 dBm Tx signal from a BCM43143 if we are optimistic. Unfortunately, "usable" Rx gain is unclear in the context of the Pi.
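
For reference, the underlying model (see the Wikipedia link inside dBmLoss.py below) is the free-space path loss formula, FSPL(dB) = 20*log10(d) + 20*log10(f) - 27.55 for d in meters and f in MHz; dBmLoss.py steps the distance up in 0.1 m increments until its loss estimate crosses the Tx + Rx threshold.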

Added 10/16/19:
Notes on generating an accurate parabolic antenna shape with FreeCAD’s Python CLI:

For whatever reason (likely my own ignorance), I have been having trouble generating an accurate parabolic dish shape in Fusion 360 (AFAICT, Autodesk is literally drenching Fusion 360 in funds right now, so I feel obligated to at least try). Bezier, spline, etc. curves are not suitable!
If you are not familiar with FreeCAD, the general approach- geometry is formed through fully constraining sketches and objects- is quite different from Sketchup / Tinkercad / Inventor / etc, as most proprietary 3d software does the “constraining” of your drawings behind the scenes. From this perspective, you can see how the following script never actually defines or changes the curve / depth of the parabola; all we need to do is change how much curve to include. A wide, shallow dish can be made by only using the very bottom of the curve, or a deep / narrow dish by including more of the ever steepening parabolic shape.

import Part, math

# musings derived from:
# https://forum.freecadweb.org/viewtopic.php?t=4430

# thinking about units here:
tu = FreeCAD.Units.parseQuantity

def mm(value):
    return tu('{} mm'.format(value))

rs = mm(1.9)
thicken = -(rs / mm(15)) 

# defer to scale during fitting / fillet elsewhere 
m=App.Matrix()
m.rotateY(math.radians(-90))
# create a parabola with the symmetry axis (0,0,1)
parabola=Part.Parabola()
parabola.transform(m)

# get only the right part of the curve
edge=parabola.toShape(0,rs)
pt=parabola.value(rs)
line=Part.makeLine(pt,App.Vector(0,0,pt.z))
wire=Part.Wire([edge,line])
shell=wire.revolve(App.Vector(0,0,0),App.Vector(0,0,1),360)

# make a solid
solid=Part.Solid(shell)

# apply a thickness
thick=solid.makeThickness([solid.Faces[1]],thicken,0.001)
Part.show(thick)

Gui.SendMsgToActiveView("ViewFit")

"""
# Fill screen:
Gui.SendMsgToActiveView("ViewFit")
# Remove Part in default env:
App.getDocument("Unnamed1").removeObject("Shape")
"""

FWIW, here is my Python implementation of a Tx/Rx "Free Space" distance calculator-
dBmLoss.py:

from math import log10
from sys import argv
'''
# estimate free space dBm attenuation:
# ...using wifi module BCM43143:

Tx = 19~20 dBm
Rx = not clear how low we can go here

d = distance Tx --> Rx
f = frequency
c = attenuation constant: meters / MHz = -27.55; see here for more info:
https://en.wikipedia.org/wiki/Free-space_path_loss
'''

f = 2400  # MHz
c = 27.55 # RF attenuation constant (in meters / MHz)

def_Tx = 20  # expected dBm transmit
def_Rx = 40  # (absolute value) of negative dBm thesh

def logdBm(num):
    return 20 * log10(num)

def maxDist(Rx, Tx):
    dBm = 0
    d = .1  # meters!
    while dBm < Tx + Rx:
        dBm = logdBm(d) + logdBm(f) - Tx - Rx + c
        d += .1  # meters!
    return d

# Why not use this with arguments Tx + Rx from shell if we want:
def useargs():
    use = bool
    try:
        if len(argv) == 3:
            use = True
        elif len(argv) == 1:
            print('\n\nyou can add (default) Rx Tx arguments using the following syntax: \n \
                python3 dBmLoss.py 20 40 \n \
                python3 dBmLoss.py <Rx> <Tx> \
                \n')
            use = False
        else:
            print('you must use both Rx & Tx arguments or no arguments')
            raise SystemExit
    except:
        print('you must use both Rx & Tx arguments or no arguments')
        raise SystemExit
    return use

def main():

    if useargs() == True:
        arg = [int(argv[1]), int(argv[2])]
    else:
        arg = [def_Rx, def_Tx]

    print(str('\n ' + str(maxDist(arg[0], arg[1])*3.281) + \
        ' ft = max. mesh node spacing, @ \n' + \
        ' Rx = ' + str(arg[0]) + '\n' + \
        ' Tx = ' + str(arg[1])))

main()

Google Calendar API with Chapel & Python

Mirroring the repo here:

Google Calendar API with Chapel & Python

Despite Chapel's many quirks and annoyances surrounding string handling, its efficiency and ease of running in parallel are always welcome. The idea here is that a Chapel script, which may need to weed through enormous numbers of files while looking for a date tag ($D plus other tags currently), is probably a better choice overall than a pure Python version (the intent is to test this properly later).

As of 9/19/19, there is still a laundry list of things to add- control flow (for instance, “don't add the event over and over”), less brittle syntax, annotations, actual error handling, etc. It does find and upload calendar entries though!

I am using Python with the Google Calendar API (see here: https://developers.google.com/calendar/v3/reference/) in a looping daemon thread. All the sifting for tags is managed with the Chapel binary, which dumps anything it finds into a CSV, from which the daemon pushes calendar entries with proper formatting. FWIW, Google’s dates (datetime.datetime) adhere to RFC3339 (https://tools.ietf.org/html/rfc3339), which conveniently is the default of the datetime.isoformat() method.
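
For instance, a minimal sketch of that timestamp formatting (not the repo's code- the fields are just the usual Calendar API event body shape, and the values here are made up):

# minimal sketch: RFC3339 timestamps for a Calendar API event body via isoformat()
from datetime import datetime, timedelta, timezone

start = datetime(2019, 9, 19, 14, 0, tzinfo=timezone.utc)
event = {
    'summary': 'tagged note ($D 09/19/19)',        # hypothetical event title
    'start': {'dateTime': start.isoformat()},      # '2019-09-19T14:00:00+00:00'
    'end': {'dateTime': (start + timedelta(hours=1)).isoformat()},
}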

Some pesky things to keep in mind:

This script uses a sync$ variable to lock other threads out of an evaluation during concurrency. So far I think the easiest way to manage the resulting domain is from within a module like so:

module charMatches {
  var dates : domain(string);
}

Here, domain charMatches.dates will need to be accessed as a reference variable from any procedures that need it.

proc dateCheck(aFile, ref choice) {
    ...
}
... 
coforall folder in walkdirs('check/') {
    for file in findfiles(folder) {
        dateCheck(file, charMatches.dates);
    }
}

errors like:

error: unresolved call '_ir_split__ref_string.size'

unresolved call 'norm(promoted expression)'

...or other variants of:
string.split().size  (length, etc)

...Tie into a Chapel Specification issue.

https://github.com/chapel-lang/chapel/issues/7982

The short solution is do not use .split; instead, I have been chopping strings with .partition().

// like so:
...
if choice.contains(line.partition(hSep)[3].partition(hTerminate)[1]) == false {
    ...process string...
    ...
}

Summer 2019 Update!

GIS Updates:

Newish Raster / DEM image → STL tool in the Shiny-Apps repo:

https://github.com/Jesssullivan/Shiny-Apps

See the (non-load balanced!) live example on the Heroku page:

https://kml-tools.herokuapp.com/

Summarized for a forum member here too:  https://www.v1engineering.com/forum/topic/3d-printing-tactile-maps/

CAD / CAM Updates:

Been revamping my CNC thoughts- 

Basically, the next move is a complete rebuild (primarily for 6061 aluminum).

I am aiming for:

  • Marlin 2.x.x around either a full-Rambo or 32 bit Archim 1.0 (https://ultimachine.com/)
  • Dual endstop configuration, CNC only (no hotend support)
  • 500mm² work area / swappable spoiler boards (~700mm exterior MPCNC conduit length)
  • Continuous compressed air chip clearing, shop vac / cyclone chip removal
  • Two chamber, full acoustic enclosure (cutting space + air I/O for vac and compressor)
  • Full octoprint networking via GPIO relays

FWIW: Sketchup MPCNC:

https://3dwarehouse.sketchup.com/model/72bbe55e-8df7-42a2-9a57-c355debf1447/MPCNC-CNC-Machine-34-EMT

Also TinkerCAD version:

https://www.tinkercad.com/things/fnlgMUy4c3i

Electric Drivetrain Development:

BORGI / Axial Flux stuff:

https://community.occupycars.com/t/borgi-build-instructions/37

Designed some rough coil winders for motor design here:

https://community.occupycars.com/t/arduino-coil-winder/99

Repo:  https://github.com/Jesssullivan/Arduino_Coil_Winder

Also, an itty-bitty, skate bearing-scale axial flux / 3-phase motor to hack upon:

https://www.tinkercad.com/things/cTpgpcNqJaB


Cheers-

- Jess

Notes on a Free and Open Source Notes App:  Joplin

Joplin for all your Operating Systems and devices

As a lifelong iOS + OSX user (Apple products), I have used many, many notes apps over the years.  From big-name apps like OmniFocus, Things 3, and Notes+, to all the usual suspects like Trello, Notability, Notemaster, RTM, and others, I always eventually migrate back to Apple Notes, simply because it is always available and always up to date.  There are zero “features” besides this convenience, which is why I am perpetually willing to give a new app a spin.

Joplin is free, open source, and works on OSX, Windows, and Linux operating systems, as well as iOS and Android phones.

Find it here:

https://joplin.cozic.net/

brew install joplin 

The most important thing this project has nailed is cloud support and syncing.  I have my iPhone and computers syncing via Dropbox, which is easy to set up and works… really well. The Joplin folks have added many cloud options, so this is unlikely to be a sticking point for users.

Here are some of the key features:

  • Markdown is totally supported for straightforward and easy formatting
  • External editor support for emacs / atom / etc folks
  • Layout is clean, uncluttered, and just makes sense
  • Built-in markdown text editor and viewer is great
  • Notebook, todo, note, and tags work great across platforms
  • Browser integration, E2EE security, file attachments, and geolocation included

Hopefully this will be helpful.

Cheers,

- Jess

Mac OSX: Fixing GPT and PMBR Tables

My computer recently crashed very, very hard while I was removing a small, empty alternative OS partition I no longer needed.  This is a fairly mundane operation that I do now and again, and is an ongoing fight to keep at least a few gigs of space free for actual work on a precious 250GB Mac SSD.

The crash results?  Toasted GPT tables all around.   My 2015 computer’s next move was to reboot- only to find essentially no partitions of memory… at all.  What it did show was (wait for it) Clover bootloader of all things, with a single Windows Boot Camp icon (nothing in there either).  That is so wrong…. On all levels!

I brought the machine to the local university repair.  They declared this machine bricked and offered to wipe it.  Back to me it came…

I scheduled an Apple support session with a phone rep, which after around 45 minutes of actually productive troubleshooting ideas (none helping though) was forwarded to a senior supervisor.  She was interested in this problem, and we scheduled a larger block of time. But, in the meantime, I still wanted to try again….

How to recover a garbled GPT table for Mac OSX:

Start with clean SMC and PRAM / NVRAM.

Clearing these actually made accessing internet recovery (how we get to a stand-in OS with a terminal) dozens of times faster.  2.5 hours to 7 minutes. I actually waited 2.5 hours twice on separate attempts before I cleared these.

Follow these Apple links to perform these operations:

https://support.apple.com/en-us/HT204063

https://support.apple.com/en-us/HT201295

Get the computer with a text editor open.

Restart the computer into internet recovery.  Command + R or Command + Shift + R.

Wait.

Open a Terminal.  The graphical disk utility is useless because the disk / partition we want is unreachable (so it will say everything is great).

Run:

diskutil list

For me, I see disk0s2 is 180.6 GB.  That’s my stuff!

I also found /dev/disk2 → /dev/disk14 to be tiny partitions- don’t worry about those.

The syntax you are looking for is:

Name: “untitled” Identifier: disk#

(NOT disk#s#)

Write down ALL of the above information for the disk you are after.  That is probably disk0.

Then:

gpt -r show disk0

Copy the following readout in your terminal for all entries bigger than “32”.  The critical fields here are Start, Size, Index, and Contents. Each field is supremely important.

Here is mine (formatted for web):

# Disk0, with contents > “32” :

# First Table:

Start: 40  

Size: 409600

Index:  1

Contents: C12A7328-F81F-11D2-BA4B-00A0C93EC93B

# Second table, the one with my data:

Start: 409640

Size: 352637568

Index: 2

Contents: FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF

Note, this is the initial Contents.  I rewrote this once with the correct Apple Index 2 data but did not create a new table (leaving the rest of the broken bits broken).  We are replacing / destroying a table here, but not the data.     

Actions:

# unmount the disk.  From here we are doing tables, not disks / data.

diskutil unmountDisk disk0

# Get rid of the GPT on the disk we are recovering.  We are not touching the data.

gpt destroy disk0

# Make a new one to start with some fresh values.

gpt create -f disk0

# perform magic trick

# USE THE DATA YOU WROTE DOWN FROM “gpt -r show disk0”.  THIS IS IMPORTANT.

# we must add that first small partition at index 1.  Verbatim.

gpt add -i 1 -b 40 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B disk0

# index two (for me) is my data.  We are going to use the default OSX / Mac HD partition values.

# the Length of “372637568” is not as sure fire as the GPT Contents.  

# YMMV, but YOLO.

gpt add -i 2 -b 409640 -s 372637568 -t 7C3457EF-0000-11AA-AA11-00306543ECAC disk0

Again, that Contents value is 7C3457EF-0000-11AA-AA11-00306543ECAC.

- Jess

written in the recovered computer xD

Musings On Chapel Language and Parallel Processing

View the readme mirror from my GitHub repo below. Scroll down for my Python3 evaluation script.

....Or visit the page directly: https://github.com/Jesssullivan/ChapelTests 

ChapelTests

Investigating modern concurrent programming ideas with Chapel Language and Python 3

See here for dupe detection: /FileChecking-with-Chapel

Iterating through all files for custom tags / syntax: /GenericTagIterator

added 9/14/19:

The thinking here is one could write a global, shorthand / tag-based note manager making use of an efficient tag gathering tool like the example here. Gone are the days of actually needing a note manager- when the need presents itself, one could just add a calendar item, todo, etc with a global tag syntax.

The test uses $D for date: $D 09/14/19

//  Chapel-Language  //

// non-annotated file @ /GenericTagIterator/nScan.chpl //

use FileSystem;
use IO;
use Time;

config const V : bool=true;  // verbose logging, currently default!

module charMatches {
  var dates = {("")};  
}

// var sync1$ : sync bool=true;  not used in example- TODO: add sync$ var back in!!

proc charCheck(aFile, ref choice, sep, sepRange) {

    // note, reference argument (ref choice) is needed if using Chapel structure "module.domain"

    try {
        var line : string;
        var tmp = openreader(aFile);
        while(tmp.readline(line)) {
            if line.find(sep) > 0 {
                choice += line.split(sep)[sepRange];
                if V then writeln('adding '+ sep + ' ' + line.split(sep)[sepRange]);
            }
        }
    tmp.close();
    } catch {
      if V then writeln("caught err");
    }
}

coforall folder in walkdirs('check/') {
    for file in findfiles(folder) {
        charCheck(file, charMatches.dates, '$D ', 1..8);
    }
}

Get some Chapel:

In a (bash) shell, install Chapel:
Mac or Linux here, others refer to:

https://chapel-lang.org/docs/usingchapel/QUICKSTART.html

# For Linux bash:
git clone https://github.com/chapel-lang/chapel
cd chapel
source util/setchplenv.bash
make
make check

#For Mac OSX bash:
# Just use homebrew
brew install chapel # :)

Get atom editor for Chapel Language support:

#Linux bash:
cd
sudo apt-get install atom
apm install language-chapel
# atom [yourfile.chpl]  # open/make a file with atom

# Mac OSX (download):
# https://github.com/atom/atom
# bash for Chapel language support
apm install language-chapel
# atom [yourfile.chpl]  # open/make a file with atom

Using the Chapel compiler

To compile with Chapel:

chpl MyFile.chpl # chpl command is self sufficient

# chpl one file class into another:

chpl -M classFile runFile.chpl

# to run a Chapel file:
./runFile

Now Some Python3 Evaluation:

# Adjacent to compiled FileCheck.chpl binary:

python3 Timer_FileCheck.py

Timer_FileCheck.py will loop FileCheck and find the average times it takes to complete, with a variety of additional arguments to toggle parallel and serial operation. The iterations are:

ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]
  • Default - full parallel

  • Serial evaluation (--SE) but parallel domain creation

  • Serial domain creation (--SP) but parallel evaluation

  • Full serial (--SE --SP)

Output is saved as Time_FileCheck_Results.txt

  • Output is also logged after each of the (default 10) loops.

The idea is to evaluate a "--flag" -in this case, Serial or Parallel in FileCheck.chpl- to see if there are time benefits to parallel processing. In this case, there really are not any, because that program relies mostly on disk speed.

Evaluation Test:

# Timer_FileCheck.py
#
# A WIP by Jess Sullivan
#
# evaluate average run speed of both serial and parallel versions
# of FileCheck.chpl  --  NOTE: coforall is used in both BY DEFAULT.
# This is to bypass the slow findfiles() method by dividing file searches
# by number of directories.

import subprocess
import time

File = "./FileCheck" # chapel to run

# default false, use for evaluation
SE = "--SE=true"

# default false, use for evaluation
SP = "--SP=true" # no coforall looping anywhere

# default true, make it false:
R = "--R=false"  #  do not let chapel compile a report per run

# default true, make it false:
T = "--T=false" # no internal chapel timers

# default true, make it false:
V = "--V=false"  #  use verbose logging?

# default is false
bug = "--debug=false"

Default = (File, R, T, V, bug) # default parallel operation
Serial_SE = (File, R, T, V, bug, SE)
Serial_SP = (File, R, T, V, bug, SP)
Serial_SE_SP = (File, R, T, V, bug, SP, SE)


ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]

loopNum = 10 # iterations of each runTime for an average speed.

# setup output file
file = open("Time_FileCheck_Results.txt", "w")

file.write(str('eval ' + str(loopNum) + ' loops for ' + str(len(ListOptions)) + ' FileCheck Options' + "\n\\"))

def iterateWithArgs(loops, args, runTime):
    for l in range(loops):
        start = time.time()
        subprocess.run(args)
        end = time.time()
        runTime.append(end-start)

for option in ListOptions:
    runTime = []
    iterateWithArgs(loopNum, option, runTime)
    file.write("average runTime for FileCheck with "+ str(option) + "options is " + "\n\\")
    file.write(str(sum(runTime) / loopNum) +"\n\\")
    print("average runTime for FileCheck with " + str(option) + " options is " + "\n\\")
    print(str(sum(runTime) / loopNum) +"\n\\")

file.close()

Evaluating Ubuntu Pop OS: Dual Boot Setup

Dual OS on a 2015 MacBook pro

As the costs of Apple computers continue to skyrocket and the price of useable amounts of storage zooms past a neighboring galaxy (for a college student at least), I am always on the hunt for cost-effective solutions to house and process big projects and large data.

Pop OS (a neatly wrapped Ubuntu) is the in-house OS from System76.  After looking through their catalog of incredible computers and servers, I thought it would be a good time to see how far I can go with an Ubuntu daily driver.  Of course, there are many major and do-not-pass-go downsides- see the below list:

  • Logic Pro X → There is no replacement 🙁   A killer DAW with fantastic AU libraries. I am versed with Reaper and Bitwig, but neither is as complete as Logic Pro.  I will be evaluating POP with an installation of Reaper, but with so few plugins (I own very few third party sets) this is not a fair replacement.
  • Adobe PS and LR:  I do not like Adobe, but these programs are... ...kind of crucial for most projects of mine that involve 2d, raster graphics.  I continue to use Inkscape for many tasks, but it is irrelevant when it comes to pixel-based work and photo management / bulk operations.
  • AutoCAD / Fusion 360 / Sketchup:  I like FreeCAD a lot, but it is not at all like the other programs.  Not worse or better, but these are all very different animals for different uses.
  • Apple notes and other apple-y things:  OSX is extremely refined. Inter-device solutions are superb.  I have gotten myself used to Google Keep, but it is not quite at the in-house Apple level.
  • XCode and IOS Simulator environments:  I do use Expo, but frankly to make products for Apple you need a Mac.

Dual Boot (OSX and Pop Ubuntu) Installation on a 2015 MBP:

This process is quite simple, and only calls for a small handful of post-installation tweaks.  My intent is to create a small sandbox with minimal use of “extras” (no extra boot managers or anything like that).

Steps:

Partition separate “boot”, “home”, and other drives

  • I am using a 256GB micro SD partitioned in half for OSX and Pop_OS (Sandisk Extreme, “v3” speed rating version card via a BaseQi slot adapter)

Use the partition tool in Mac Disk Utility.  Be sure to set these new partitions as FAT32- we will be using ext4 and other more linux-y filesystems upon installation, so these need to be as generic as possible.

Get a copy of Pop_OS from System76.

Use Etcher (recommended) or any other image burning tool to create a boot key for Pop.  

The USB key only has one small job, in which Pop_OS will be burned into a better location in your boot partition made in the previous step.  If you are coming from a hackintosh experience, fear not: everything will stay in the MacBook Pro- no extra USB safety dongles, Kexts, or Plist mods…!

BOOT INTO POP_OS:

Restart your computer and hold down the alt/option key.  THIS IS HOW TO SWITCH between Pop_OS, OSX, Bootcamp, and anything else you have in there.  You should see an “efi” option next to the default OSX. (Note- at least in my case, the built-in bootloader defaults to the last used OS at each restart.)

Once you are in the Pop_OS installer, click through and select the appropriate partitions when prompted.  After this installation, you may remove the USB key and continue to select “efi” in the bootloader.


ASSUMING ALL GOES WELL:

You are now in Pop_OS!  Using the alt/option key will become second nature… but some Pop key mappings may not.  Continue for a list of MacBook Pro-specific tweaks and notes.

First moves:

Go to the Pop Shop and get the “Tweaks” tool.  I made one or two small keymap changes, but this is likely personal preference.  

Default, important Key Mappings:

Command will act as a “control center-ish” thing.  It will not copy or paste anything for you.

Control does what Command did on OSX.  

Terminal uses Control+Shift for copy and paste, but only in Terminal:  if you pull a Control+Shift+C in Chrome, you will get the Dev tool GUI...  The Shift key thing is needed unless you are inclined to root around and change it.

Custom Boot Scripts and Services:

In an effort to make things simple, I made a shell script to house the processes I want running when I turn on the computer- this is to streamline the “.service” making process.  While it may only take marginally more time to make a new service, this way I can keep track of what is doing what from a file in my documents folder.

In terminal, go to where your services live if you want to look:

cd /etc/systemd/system

Or, cut to the chase:

sudo nano /etc/systemd/system/startsh.sh.service

Paste the following into this new file:

_____________Begin _After_This_Line____________________

[Unit]

Description=Start at Open plz

[Service]

ExecStart=/Documents/startsh.sh

[Install]

WantedBy=multi-user.target

_____________End _Above_This_Line____________________

Exit nano (saving as you go) and cd back to “/”.

cd

sudo nano /Documents/startsh.sh

Paste the following (and any scripts you may want, see the one I have commented out for odrive CLI) into this new file:

_____________Begin _After_This_Line____________________

#!/bin/bash

# Uncomment the following if you want 24/7 odrive in your system

# otherwise do whatever you want

#nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &

# end

_____________End _Above_This_Line____________________

After exiting the shell script, start it all up with the following:

sudo systemctl start startsh.sh

sudo systemctl enable startsh.sh

Cloud file management with Odrive CLI and Odrive Utilities:

Visit one of the two Odrive CLI pages- this one has linux in it:

https://forum.odrive.com/t/odrive-sync-agent-a-cli-scriptable-interface-for-odrives-progressive-sync-engine-for-linux-os-x-and-windows/499#linuxinst

Please visit this repo to get going with --recursive and other odrive utilities

https://github.com/amagliul/odrive-utilities


These are the two commands I ended up putting in a markdown file on my desktop for easy access.  Nope, not nearly as cool as it is on OSX. But it works…

Odrive sync: [-h] for help

```

python "$HOME/.odrive-agent/bin/odrive.py" sync

```

Odrive utilities:

```

python "$HOME/odrive-utilities/odrivecli.py" sync --recursive

```

Next, Get Some Apps:

Download Chrome.  Sign into Chrome to get your chrome OS apps loaded into the launcher- in my case, I needed Chrome remote desktop.  DO NOT DOWNLOAD ADDITIONAL PACKAGES for Chrome Remote Desktop, if that is your thing. They will halt all system tools (disk utils, Gnome terminal, graphical file viewer…   !!See this thread, it happened to me!! )

Stock up!  

Get Atom editor:  https://atom.io/

...Or my favorites: https://www.jetbrains.com/toolbox/app/

Rstudio:  https://www.rstudio.com/products/rstudio/download/#download

Mysql:  https://dev.mysql.com/downloads/mysql/

MySQL Workbench:  https://dev.mysql.com/downloads/workbench/

If you get stuck:  make sure you have tried installing as root ($ sudo su -) and verified passwords with ($ sudo mysql_secure_installation)  

See here to start “rooting around” MySQL issues:  https://stackoverflow.com/questions/50132282/problems-installing-mysql-in-ubuntu-18-04/50746032#50746032

Get some GIS tools:

QGIS!

sudo apt-get install qgis python-qgis qgis-plugin-grass

uGet for bulk USGS data download!

sudo add-apt-repository ppa:plushuang-tw/uget-stable

sudo apt install uget

That's all for now- Cheers!

-Jess

Deploy Shiny R apps along Node.JS

Find the tools in action on Heroku as a node.js app!

https://kml-tools.herokuapp.com/

See the code on GitHub:

https://github.com/Jesssullivan/Shiny-Apps

After many iterations of ideas regarding deployment for a few research Shiny R apps, I am glad to say the current web-only setup is 100% free and simple to adapt.   I thought I'd go through some of the Node.JS bits I have been fussing with. 

The Current one:  

Heroku has a free tier for node.js apps.  See the pricing and limitations here: https://www.heroku.com/pricing. As far as I can tell, there is little reason to read too far into a free plan; they don’t have my credit card, and they seem to convert enough folks to paid customers to be nice enough to offer a free something to everyone.

Shinyapps.io (https://www.shinyapps.io/) works straight from RStudio.  They have a free plan. Similar to Heroku, I can't care too much about limitations, as it is completely free.

The reasons to use Node.JS (even if it is just a jade/html wrapper) are numerous, though they may not be completely obvious.  If nothing else, Heroku will serve it for free….

Using node is nice because you get all the web-layout-ux-ui stacks of stuff if you need them.  Clearly, I have not gone to many lengths to do that, but it is there.

Another big one is using node.js with Electron.  https://electronjs.org/ The idea is that a desktop app framework serves up your node app to itself, via Chromium.  I had a bit of a foray with Electron- the node execa package (npm install execa) let me launch a shiny server from Electron, wait a moment, then load a node/browser app that acts as an interface to the shiny process.  While this mostly worked, it is definitely overkill for my shiny stuff.  Good to have as a tool though.

-Jess

Quick fix: 254 character limit in ESRI Story Map?

https://gis.stackexchange.com/questions/75092/maximum-length-of-text-fields-in-shapefile-and-geodatabase-formats

https://en.wikipedia.org/wiki/GeoJSON

https://gis.stackexchange.com/questions/92885/ogr2ogr-converting-kml-to-geojson

If you happen to be working with....  KML data (or any data with large description strings) and transitioning it into the ESRI Story Map toolset, there is a very good chance you hit the dBase 254 character length limit with the ESRI Shapefile upload.  Shapefiles are always a terrible idea.

 

The solution:  with GDAL or QGIS (alright, even in ArcMap), one can use GeoJSON as an output format AND import into the story map system- with complete long description strings!

 

QGIS:

Merge vector layers -> save to file -> GeoJSON

arcpy:

import arcpy
import os

arcpy.env.workspace = "/desktop/arcmapstuff"
arcpy.FeaturesToJSON_conversion(os.path.join("outgdb.gdb", "myfeatures"), "output.json")

GDAL:
ogr2ogr -f GeoJSON output.json input.kml

New App:  KML Search and Convert

Written in R; using GDAL/EXPAT libraries on Ubuntu and hosted with AWS EC2.

Here is a simple (beta) app of mine that converts KML files into Excel-friendly CSV documents.  It also has a search function, so you can download a subset of data that contains keywords.   🙂

The files will soon be available on GitHub.

I'm still working on a progress indicator; it currently lets you download before it is done processing.   Know that a completely processed file is titled "kml2csv_<yourfile>.csv".

...YMMV.  xD

GDAL for R Server on Ubuntu – KML Spatial Libraries and More

GDAL for R Server on Red Hat Xenial Ubuntu - KML Spatial Libraries and More

If you made the (possible) mistake of running with a barebones Red Hat Linux instance, you will find it is missing many things you may want in R.   I rely on GDAL (the definitive Geospatial Data Abstraction Library) on my local OSX R setup, and want it on my server too.  GDAL contains many libraries you need to work with KML, RGDAL, and other spatial packages.  It is massive and usually takes a long time to sort out on any machine.

These notes assume you are already involved with an R server (usually port 8787 in a browser).  I am running mine from an EC2 instance with AWS.

! Note this is a fresh server install, using Ubuntu; I messed up my original ones while trying to configure GDAL against conflicting packages. If you are creating a new one, opt for at least a T2 medium (or go bigger) and find the latest Ubuntu server AMI.  For these instructions, you want an OS that is as generic as possible.

On Github:

https://github.com/Jesssullivan/rhel-bits

From Bash:

# SSH into the EC2 instance: (here is the syntax just in case)

#ssh -i "/Users/YourSSHKey.pem" ec2-user@yourAWSinstance.amazonaws.com

sudo su -

apt-get update

apt-get upgrade

nano /etc/apt/sources.list

#enter as a new line at the bottom of the doc:

deb https://cloud.r-project.org/bin/linux/ubuntu xenial/

#exit nano

wget https://raw.githubusercontent.com/Jesssullivan/rhel-bits/master/xen-conf.sh

chmod 777 xen-conf.sh

./xen-conf.sh

Or...

From SSH:

# SSH into the EC2 instance: (here is the syntax just in case)

ssh -i "/Users/YourSSHKey.pem" ec2-user@yourAWSinstance.amazonaws.com

# if you can, become root and make some global users- these will be your access to

# RStudio Server and shiny too!

sudo su -

adduser <Jess>

# Follow the following prompts carefully to create the user

apt-get update

nano /etc/apt/sources.list

# enter as a new line at the bottom of the doc:

deb https://cloud.r-project.org/bin/linux/ubuntu xenial/

# exit nano

# Start, or try bash:

apt-get install r-base

apt-get install r-base-dev

apt-get update

apt-get upgrade

wget http://download.osgeo.org/gdal/2.3.1/gdal-2.3.1.tar.gz

tar xvf gdal-2.3.1.tar.gz

cd  gdal-2.3.1

# begin making GDAL: this all takes a while

./configure  [if you need proper KML support (like me), search on configuring with expat or libkml.   There are many more options for configuration based on other packages that can go here, and this is the step to get them in order...]

sudo make

sudo make install

cd # Try entering R now and check the version!

# Start installing RStudio server and Shiny

apt-get update

apt-get upgrade
sudo apt-get install gdebi-core
wget https://download2.rstudio.org/rstudio-server-1.1.456-amd64.deb
sudo gdebi rstudio-server-1.1.456-amd64.deb

# Enter R or go to the graphical R Studio installation in your browser

R

# Authenticate if using the graphical interface using the usr:pwd you defined earlier

# this will take a long time

install.packages("rgdal")

# Note any errors carefully!

Then:

install.packages("dplyr")

install.packages(c("data.table", "tidyverse", "shiny"))  # etc

Well, there you have it!

-Jess

Extras:

##Later, ONLY IF you NEED Anaconda, FYI:

# Get Anaconda: this is a large package manager, and could be used for patching up missing dependencies:

#Use  "ls" followed by rm -r <anaconda> (fill in with ls results) to remove conflicting conda

# installers if you have any issue there, I am starting fresh:

mkdir binconda

# *making a weak attempt at sandboxing the massive new package manager installation*

cd binconda
wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh
# install and follow the prompts
bash Anaconda2-4.3.0-Linux-x86_64.sh

# Close the terminal window completely and start a new one, and ssh back to where you left

# off.  Conda install requires this.

# open and SSH back into your instance.  You should now have either additional flexibility in

# either patching holes in dependencies, or created some large holes in your server.  YMMV.

### Done

Red Hat stuff:

Follow these AWS instructions if you are doing something else:

https://aws.amazon.com/blogs/big-data/running-r-on-aws/

See my notes on this here:

https://www.transscendsurvival.org/2018/03/08/how-to-make-a-aws-r-server/

and notes on Shiny server:

https://www.transscendsurvival.org/2018/07/16/deploy-a-shiny-web-app-in-r-using-aws-ec2-red-hat/

GDAL on Red Hat- existing threads on this:

https://gis.stackexchange.com/questions/120101/building-gdal-with-libkml-support/120103#120103

This is a nice short thread about building from source:

https://gis.stackexchange.com/questions/263495/how-to-install-gdal-on-centos-7-4

neat RPM package finding tool, just in case:

https://rpmfind.net/linux/rpm2html/

Info on the LIBKML driver if you end up with issues there:

http://www.gdal.org/drv_libkml.html

 

I hope this is useful- GDAL is important and best to set it up early.  It will be a pain, but so is losing work while trying to patch it in later.  xD

 

-Jess

 

INFO: Deploy a Shiny web app in R using AWS (EC2 Red Hat)

Info on deploying a Shiny web app in R using AWS (EC2 Redhat)

As a follow-up to my post on how to create an AWS RStudio server, the next logical step is to host some useful apps you created in R for people to use.  A common way to do this is the R-specific tool Shiny, which is built in to RStudio.  Learning the syntax to convert R code into a Shiny app is rather subtle, and can be hard.  I plan to do a more thorough demo on this- particularly the use of the $ symbol, as in “input$output”- later. 🙂

 

It turns out hosting a Shiny Web app provides a large number of opportunities for things to go wrong….  I will share what worked for me.  All of this info is accessed via SSH, to the server running Shiny and RStudio.

 

I am using the AWS “Linux 2” AMI, which is based on the Red Hat OS.  For reference, here is some extremely important Red Hat CLI language worth being familiar with and debugging:

 

“sudo yum install” and “wget” are for fetching and installing things like shiny.  Don’t bother with instructions that include “apt-get install”, as they are for a different Linux OS!

 

“sudo chmod -R 777” is how you change your directory permissions for read, write, and execute (all of those enabled).  This is handy if your server is disconnecting when the app tries to run something- it is a simple fix to a problem not always evident in the logs.  The default root folder from which shiny apps are hosted and run is “/srv/shiny-server” (or just “/srv” to be safe).

 

“nano /var/log/shiny-server.log” opens the current Shiny logs.

 

“sudo stop shiny-server” followed by “sudo start shiny-server” is the best way to restart the server- “sudo restart shiny-server” is not a sure bet here or with any other process.  It is true that other tools like a node.js server or nginx could impact the success of Shiny- if you think nginx is a problem, “cd /etc/nginx” followed by “ls” will get you in the right direction.  Others have cited problems with Red Hat not including the directories and files at “/etc/nginx/sites-available”.  You do not need these directories (though they are probably important for other things).

 

“sudo rm -r” is a good way to destroy things, like a mangled RStudio installation.  Remember, it is easy enough to start again fresh!  🙂

 

“sudo nano /etc/shiny-server/shiny-server.conf” is how to access the config file for Shiny.  The fresh-install version I used did not work!  There will be lots of excess in that file, much of which can cause issues in a bare-bones setup like mine.  One important key is to ensure Shiny is using a root user- see my example file below.  I am the root user here (jess)- change that to mirror, at least for the beginning, the user defined as root in your AWS installation.  See my notes HERE on that- it is defined in the advanced settings of the EC2 instance.

 

BEGIN CONFIG FILE:   (or click to download) *Download is properly indented


# Define user: this should be the same user as the AWS root user!
#
run_as jess;
#
# Define port and where the home (/) directory is
# Define site_dir/log_dir - these are the defaults
#
server {
  listen 3838;
  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}

END CONFIG FILE

Well, the proof is in the pudding.   At least for now, you can access a basic app I made that cleans csv field data files that were entered into Excel by hand.  They start full of missing fields and have a weird two-column setup for distance- the app cleans all these issues and returns a 4-column (from 5-column) csv.
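The cleanup itself boils down to something like the rough sketch below (the column names and file names here are invented for illustration; the real ones differ):

# hypothetical sketch of the cleanup: collapse the two distance columns into one,
# drop rows that are still missing data, and write out a 4-column csv
library(data.table)
library(tidyverse)

raw <- data.frame(fread("field_data.csv"))                # hand-entered field data
raw$distance <- coalesce(raw$distance_a, raw$distance_b)  # merge the two-column distance setup
clean <- raw %>%
  select(-distance_a, -distance_b) %>%                    # 5 columns down to 4
  drop_na()                                               # remove rows with missing fields
write_csv(clean, "field_data_clean.csv")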

Download the test file here:   2012_dirt_PCD-git

And access the app here:  Basic Shiny app on AWS!

Below is an iFrame into the app, just to show how very basic it is.  Give it a go!

-Jess

Off-Grid File Sharing with SAMBA / GL.iNet

Note:  SMB / SharePoint is surely better with a proper server/computer.  A Raspberry Pi running OpenMediaVault (Debian) is a more common and robust option (still 5v low power).

If you are actually in an "it must be done in OpenWRT" scenario, Click Here for my Samba config file: OpenWRT_Samba-config and see below.  Also, please use an NTFS or EXT4 format.  🙂

 

...While my sharing method wasn't actually adopted by others, I still think it is good to know!

-Jess

How to Query KML point data as CSV using QGIS and R


Here you can see more than 800 points, each describing an observation of an individual bird.  This data is in the form of KML, a sort of XML document from Google for spatial data.

 

I want to know which points have “pair” or “female” in the description text nodes using R.  This way, I can quickly make and update a .csv in Excel of only the paired birds (based on color bands).

Even if there were a description string search function in Google Earth Pro (or other organization-centric GIS/waypoint software), this method is more robust, as I can work immediately with the output as a data frame in R, rather than a list of results.
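(As an aside: if GDAL's KML/LIBKML driver is available, sf can read the file straight into R and skip the export below entirely- the filename here is just a placeholder, and the exact name of the description column depends on the driver:)

# read the KML point layer directly; the description text comes along as a column
library(sf)
birds <- st_read("bird_points.kml")
data <- as.data.frame(birds)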

 

First, open an instance of QGIS.  I am running ~2.8 on OSX.  Add a vector layer of your KML.

“Command-A” in the point dialog to select all before import!

Next, under “Vector”, select “Merge vector layers” via Data Management Tools.

 

Select CSV and elect to save the file instead of using a temporary/scratch file (this is a common error).

Open your csv in Excel for verification!

The R bit:

# query for paired birds

#EDIT:  Libraries
library(data.table)
library(tidyverse)

# read the merged QGIS export into a plain data frame
data <- data.frame(fread("Bird_CSV.csv"))

# row indices whose description mentions "pair" or "fem"
pair_rows <- contains("pair", vars = data$description)

fem_rows <- contains("fem", vars = data$description)

# merge the two index sets and subset the original data
result <- combine(pair_rows, fem_rows)

result <- data[result,]

write_csv(result, "Paired_Birds.csv")
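Note: combine() has since been deprecated in dplyr, and the contains() select-helper may refuse to run outside of select() on newer versions.  A rough equivalent with base grepl() would be something like:

# flag rows whose description mentions "pair" or "fem", then subset and write out
keep <- grepl("pair", data$description, ignore.case = TRUE) |
  grepl("fem", data$description, ignore.case = TRUE)
result <- data[keep, ]
write_csv(result, "Paired_Birds.csv")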

Tada!

-Jess

Solar upgrades!


Incredibly, the hut we are working from actually had another solar panel just lying around.  🙂
This 50w square panel had a junction box with MC4 connectors, the standard for small-scale solar installations.  As I was unsure how to tell when we were running low on electricity reserves, I decided to make some adjustments.

Additional 50w solar panel

(Everything is still solder, hot glue, alligator clips, and zip-ties I’m afraid…)
I traded my NEMA / USA two-prong connection for two MC4 splitters, such that both panels can run in parallel (into a standard USA 110v extension cord that goes into our hut).  This way we should make well over one of the two 35ah batteries' worth of electricity a day.

Dual MC4 splitters to extension cord
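Back-of-the-envelope, the numbers bear that out (a rough sketch in R, assuming roughly 4 usable sun hours a day):

# combined panel output per day vs. one battery's worth of storage
(100 + 50) * 4    # 150w of panels * ~4 sun hours = 600 Wh/day coming in
35 * 12           # one 35ah battery at 12v = 420 Wh stored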

I also added a cheap 12v battery level indicator.  It is not very accurate (as it fluctuates with solar input) but it does give us some insight about how much "juice" we have available.  (I also wired and glued the remote-on switch to the back of the input for stability.)

Added battery indicator and button

🙂
-Jess

Gathering point data using Compass 55 on Apple iOS

Keeping track of birds is tricky!

Attached is our team's workflow with Compass 55.  From the KML, we go into Google Earth Pro and then ArcGIS Desktop (ArcMap).  QGIS is sometimes used too.

 

Cheers,

-Jess

 

840 Watts of Solar Power!

Equipment used:

Inverter/PWM Controller:  http://a.co/fdl9YzI

2x 35ah Batteries: http://a.co/5JBIxTC

100w solar panel:  http://a.co/5JBIxTC

We need power!  While doing bird research in the wilds of northern NH, it became evident we needed electricity to power computers, big cameras, and phones/GPS units.

Below is a table of the system and our expected electricity needs:

System                  Solar 100w    35ah universal (x2)
Ah per day              33.3          35                     TOTAL Ah Reserve: 70
V                       12            12                     Parallel wiring: 12v
Wh in                   400           420                    TOTAL Wh Reserve: 840
W                       100
Cost                    $105.00       $64.00
ah/$                                  2
Sun Hour / Multiplier   4             2

Need/Day     Wh      multiplier    consump. in Wh (total = 259.36)
Computer     100     2.5           250
iPhone       1.7     2             3.4
AAs          11.2    0.3           3.36
Camera       2.6     1             2.6

*The milk crate system below can charge a 100 watt MacBook Pro around 8-9 times from being completely empty.  

**Remember:  V*A=W,  W/V=A, and Watts over time is Wh.  
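The same arithmetic as a quick sanity check in R (numbers pulled from the table above):

# daily draw in watt-hours: watts * hours (or uses) per day
needs <- c(computer = 100 * 2.5, iphone = 1.7 * 2, aas = 11.2 * 0.3, camera = 2.6 * 1)
sum(needs)                # ~259.4 Wh needed per day

# supply and reserve
100 * 4                   # 100w panel * ~4 sun hours = 400 Wh in per day
reserve <- 2 * 35 * 12    # two 35ah batteries at 12v = 840 Wh stored
reserve / sum(needs)      # roughly 3 days of reserve with no sun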

-Jess

+/- relates to size of standard prongs

Parallel maintains 12v but doubles Ah. (Series would go to 24v at 35ah)

 

Intro to the AWS Cloud 9 IDE

The Cloud 9 IDE is the fastest way I have come up with to develop web-based or otherwise "connected" programs.  Because it lives on a Linux-based EC2 server on AWS, running node, HTML, and other programs that rely on a network just works- it is all already on a network anyway.   🙂  There is no downtime trying to figure out your WAMP, MAMP, Apache, or localhost situation.

Similarly, other network programs work just as well-  I am running a MySQL server over here (RDS), storage over there (S3), and have various bits in Github and locally.   Instead of configuring local editors, permissions, and computer ports and whatnot, you are modifying the VPC security policies and IAM groups- though generally, it just works.

Getting going:   The only prerequisite is you have an AWS account.  Students:  get $40 EC2 dollars below:

https://aws.amazon.com/education/awseducate/
Open the Cloud 9 tab under Services.

Setup is very fast- just know that if others are going to be editing too, you should understand the IAM policies and what VPC settings you actually want.

 

Know that this is ideally a browser-based service; I have tried to come up with a reason an SSH connection would be better and didn't get anywhere.

For one person, a micro instance is fine.  Know that these virtual "RAMs" and "CPUs" are generous....

The default network settings are set up for you.  This follows good practice for one person; for more than that (or if you are perhaps a far-travelling person), take note of these settings.  They are always editable under the VPC and EC2 instance tabs.

That's it!  Other useful things to know:

This is a Linux machine maintained by Amazon.  Packages you think should work and be up to date (arguably like any other Linux machine, I guess...) may not be.  Check your basics like the NPM installer and the versions of what you're going to be working on- it very well may be different from what you are used to.

In the editor:

You have two panels of workspace in the middle- shown is node and HTML.   Everything is managed by tabs- all windows can have as much stuff as you want this way.

Below there is a "runner" (shown with all the default options!) and a terminal window.  Off to the left is a generic file manager.

I hope this is useful- it sure is great for me.

-Jess

Using ESRI ArcGIS / ArcMap in the AWS Cloud

Selling AWS to... myself   🙂

Why struggle with underpowered local machines and VMs or watered-down web platforms for heavy lifting,  learning and work?

In addition to using ESRI software on Mac computers, I am a big fan of the AWS WorkSpaces service (alongside all their other developer tools, some of which are map-relevant: RDS for SQL and EC2 Red Hat servers for data management, for example).

Basically, for between ~$20 and ~$60 a month (max, and not factoring in EDU discounts!), a user gets a well-oiled remote desktop.  You can download and license desktop apps like ArcMap and GIS products, file managers, and more from any computer connected to the internet.  This service does not require much savvy; you make/receive a password and log right in.

A big plus here of course is the WorkSpaces Application Manager (WAM); small sets of licenses can be administered in the same way desktops would be, made easier by the fact that they are already part of the same cloud anyway.

Another plus is that any client- netbook, MacBook, VM, etc.- will work equally well.  In this regard it can be a very cheap way to get big-data work done on otherwise insufficient machines.  Local storage and file systems work well with the client application, with the caveat being network speed.

🙂

https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html