Trans Scend Survival

Trans: Latin prefix implying "across" or "beyond", often used in gender-nonconforming contexts – Scend: archaic word describing a strong "surge" or "wave", originating with 15th-century English sailors – Survival: 15th-century English compound word describing an existence only worth transcending.

Category: Lifestyle/How-To

Summer 2019 Update!

GIS Updates:

Newish Raster / DEM image → STL tool in the Shiny-Apps repo:

https://github.com/Jesssullivan/Shiny-Apps

See the (non-load balanced!) live example on the Heroku page:

https://kml-tools.herokuapp.com/

Summarized for a forum member here too:  https://www.v1engineering.com/forum/topic/3d-printing-tactile-maps/
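For the curious: the basic idea behind a DEM → STL conversion is to treat each raster cell as a height and emit two triangles per grid cell. Here is a minimal Python sketch of that idea; it is not the Shiny app's actual code, and the function and file names are made up:

# a rough sketch of the DEM -> STL idea (NOT the Shiny app's code):
# treat a 2D height array as a vertex grid and emit ASCII STL triangles.
import numpy as np

def dem_to_ascii_stl(z, path, xy_scale=1.0):
    rows, cols = z.shape
    with open(path, "w") as f:
        f.write("solid dem\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                # two triangles per DEM cell; normals left at 0 (slicers recompute)
                a = (j * xy_scale, i * xy_scale, float(z[i, j]))
                b = ((j + 1) * xy_scale, i * xy_scale, float(z[i, j + 1]))
                c = (j * xy_scale, (i + 1) * xy_scale, float(z[i + 1, j]))
                d = ((j + 1) * xy_scale, (i + 1) * xy_scale, float(z[i + 1, j + 1]))
                for tri in ((a, b, c), (b, d, c)):
                    f.write("facet normal 0 0 0\nouter loop\n")
                    for v in tri:
                        f.write("vertex %f %f %f\n" % v)
                    f.write("endloop\nendfacet\n")
        f.write("endsolid dem\n")

# e.g., a fake 50 x 50 "DEM" as a printable heightmap:
dem_to_ascii_stl(np.random.rand(50, 50) * 10.0, "tactile_map.stl")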

CAD / CAM Updates:

Been revamping my CNC thoughts:

Basically, the next move is a complete rebuild (primarily for 6061 aluminum).

I am aiming for:

  • Marlin 2.x.x on either a full Rambo or 32-bit Archim 1.0 (https://ultimachine.com/)
  • Dual endstop configuration, CNC only (no hotend support)
  • 500mm² work area / swappable spoil boards (~700mm exterior MPCNC conduit length)
  • Continuous compressed air chip clearing, shop vac / cyclone chip removal
  • Two chamber, full acoustic enclosure (cutting space + air I/O for vac and compressor)
  • Full OctoPrint networking via GPIO relays

FWIW: Sketchup MPCNC:

https://3dwarehouse.sketchup.com/model/72bbe55e-8df7-42a2-9a57-c355debf1447/MPCNC-CNC-Machine-34-EMT

Also TinkerCAD version:

https://www.tinkercad.com/things/fnlgMUy4c3i

Electric Drivetrain Development:

BORGI / Axial Flux stuff:

https://community.occupycars.com/t/borgi-build-instructions/37

Designed some rough coil winders for motor design here:

https://community.occupycars.com/t/arduino-coil-winder/99

Repo:  https://github.com/Jesssullivan/Arduino_Coil_Winder

Also, an itty-bitty, skate bearing-scale axial flux / 3-phase motor to hack upon:

https://www.tinkercad.com/things/cTpgpcNqJaB


Cheers-

– Jess


Notes on a Free and Open Source Notes App:  Joplin

Joplin for all your Operating Systems and devices

As a lifelong iOS + macOS user (Apple products), I have used many, many notes apps over the years.  From big-name apps like OmniFocus, Things 3, and Notes+, to all the usual suspects like Trello, Notability, Notemaster, RTM, and others, I always eventually migrate back to Apple Notes, simply because it is always available and always up to date.  There are zero "features" besides this convenience, which is why I am perpetually willing to give a new app a spin.

Joplin is free, open source, and works on macOS, Windows, and Linux, as well as iOS and Android phones.

Find it here:

https://joplin.cozic.net/

brew install joplin 

The most important thing this project has nailed is cloud support and syncing.  I have my iPhone and computers syncing via Dropbox, which is easy to set up and works… really well.  The Joplin folks have added many cloud options, so this is unlikely to be a sticking point for users.

Here are some of the key features:

  • Markdown is totally supported for straightforward and easy formatting
  • External editor support for emacs / atom / etc folks
  • Layout is clean, uncluttered, and just makes sense
  • Built-in markdown text editor and viewer is great
  • Notebook, todo, note, and tags work great across platforms
  • Browser integration, E2EE security, file attachments, and geolocation included

Hopefully this will be helpful.

Cheers,

– Jess

Mac OSX: Fixing GPT and PMBR Tables

My computer recently crashed very, very hard while I was removing a small, empty alternative-OS partition I no longer needed.  This is a fairly mundane operation that I do now and again, part of an ongoing fight to keep at least a few gigs of space free for actual work on a precious 250 GB Mac SSD.

The crash results?  Toasted GPT tables all around.  My 2015 computer's next move was to reboot, only to find essentially no partitions… at all.  What it did show was (wait for it) the Clover bootloader of all things, with a single Windows Boot Camp icon (nothing in there either).  That is so wrong… on all levels!

I brought the machine to the local university repair.  They declared this machine bricked and offered to wipe it.  Back to me it came…

I scheduled an Apple support session with a phone rep, which, after around 45 minutes of actually productive troubleshooting ideas (none of them helping, though), was forwarded to a senior supervisor.  She was interested in this problem, and we scheduled a larger block of time.  But in the meantime, I still wanted to try again…

How to recover a garbled GPT table for Mac OSX:

Start with clean SMC and PRAM / NVRAM.

Clearing these actually made accessing internet recovery (how we get to a stand-in OS with a terminal) dozens of times faster: from 2.5 hours down to 7 minutes.  I actually waited 2.5 hours twice, on separate attempts, before I cleared these.

Follow these Apple links to perform these operations:

https://support.apple.com/en-us/HT204063

https://support.apple.com/en-us/HT201295

Get another device with a text editor open; you will need somewhere to write down partition values.

Restart the computer into internet recovery.  Command + R or Command + Shift + R.

Wait.

Open a Terminal.  The graphical Disk Utility is useless here because the disk / partition we want is unreachable (so it will say everything is great).

Run:

diskutil list

For me, I see disk0s2 is 180.6 GB.  That’s my stuff!

I also found /dev/disk2 through /dev/disk14 to be tiny partitions; don’t worry about those.

The syntax you are looking for is:

Name: “untitled” Identifier: disk#

(NOT disk#s#)

Write down ALL of the above information for the disk you are after.  That is probably disk0.

Then:

gpt -r show disk0

Copy the readout in your terminal for all entries with a Size bigger than 32.  The critical fields here are Start, Size, Index, and Contents.  Each field is supremely important.

Here is mine (formatted for web):

# disk0, entries with Size > 32:

# First Table:

Start: 40  

Size: 409600

Index:  1

Contents: C12A7328-F81F-11D2-BA4B-00A0C93EC93B

# Second table, the one with my data:

Start: 409640

Size: 352637568

Index: 2

Contents: FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF

Note: this is the initial Contents.  I rewrote it once with the correct Apple Index 2 data but did not create a new table (leaving the rest of the broken bits broken).  We are replacing / destroying a table here, but not the data.
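A quick sanity check on which entry holds your data: Size is counted in sectors, so multiplying out should land near the partition size diskutil reported.  A minimal sketch, assuming 512-byte logical sectors (diskutil info disk0 can confirm yours):

# confirm a GPT entry's Size (in sectors) matches the partition you expect;
# assumes 512-byte logical sectors -- check with "diskutil info disk0".
SECTOR_BYTES = 512

def sectors_to_gb(sectors):
    return sectors * SECTOR_BYTES / 1e9

print(sectors_to_gb(352637568))  # ~180.6 GB, matching disk0s2 from "diskutil list"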

Actions:

# unmount the disk.  From here we are doing tables, not disks / data.

diskutil unmountDisk disk0

# Get rid of the GPT on the disk we are recovering.  We are not touching the data.

gpt destroy disk0

# Make a new one to start with some fresh values.

gpt create -f disk0

# perform magic trick

# USE THE DATA YOU WROTE DOWN FROM “gpt -r show disk0”.  THIS IS IMPORTANT.

# we must add that first small partition at index 1.  Verbatim.

gpt add -i 1 -b 40 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B disk0

# index two (for me) is my data.  We are going to use the default OSX / Mac HD partition values.

# the Size of “352637568” is not as sure-fire as the GPT Contents.

# YMMV, but YOLO.

gpt add -i 2 -b 409640 -s 352637568 -t 7C3457EF-0000-11AA-AA11-00306543ECAC disk0

Again, that Contents value is 7C3457EF-0000-11AA-AA11-00306543ECAC.

– Jess

written on the recovered computer xD

Musings On Chapel Language and Parallel Processing

Below is a mirror of the readme from my GitHub repo; scroll down for my Python3 evaluation script.

…Or visit the page directly: https://github.com/Jesssullivan/ChapelTests

ChapelTests

Investigating modern concurrent programming ideas with Chapel Language and Python 3

Repo in light of PSU OS course 🙂

Test FileCheck.chpl from this repo:


git clone https://github.com/Jesssullivan/ChapelTests
cd ChapelTests/FileChecking-with-Chapel

# compile the fastest / most up to date script:
chpl FileCheck2.chpl  # not annotated / no extra --args

# compile with all options (old sync method):
chpl FileCheck.chpl

# evaluate 5 different run times:
python3 Timer_FileCheck.py

These two FileCheck scripts provide both parallel and serial methods for recursive duplicate file finding in Cray’s Chapel Language. All solutions will be “slow”, as they are fundamentally limited by disk speed.

Revision 2 uses the standard sync$ variable form.

Use Timer_FileCheck.py to evaluate completion times for all Serial and parallel options. Go to /ChapelTesting-Python3/ for more information on these tests.

To run:

# In Parallel:
chpl FileCheck.chpl && ./FileCheck
# or:
chpl FileCheck2.chpl && ./FileCheck2

Dealing with Dupes in Chapel

Generate three text docs:

  • Same size, same contents as one another
  • Same size, different contents
  • Same size, less than 8 bytes

Please see the Python3 evaluation scripts to run these options in a loop; a rough sketch of making the files by hand follows below.
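For reference, here is a minimal Python sketch of generating the three cases by hand.  The filenames are hypothetical (the SAME / DIFF prefixes just mirror the config consts below); the repo's scripts handle this properly:

# generate the three duplicate-finder test cases by hand;
# filenames are made up -- the repo's Python scripts do this in a loop.
from pathlib import Path

Path("SAME_a.txt").write_text("x" * 64)  # same size, same contents as one another
Path("SAME_b.txt").write_text("x" * 64)
Path("DIFF_a.txt").write_text("y" * 64)  # same size, different contents
Path("tiny.txt").write_text("abc")       # less than 8 bytes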

--Flags:

example:

./FileCheck --V --T --debug

…will run FileCheck with internal timers (--T), which will be displayed with the verbose logs (--V) and all extra debug logging (--debug) from within each loop.

All config --Flags:

// serial options:
config const SE : bool = false;  // use serial evaluation?
config const SP : bool = false;  // build MasterDom serially with findfiles()?

// logging options:
config const V : bool = true;  // verbose output of actions?
config const debug : bool = false;  // enable verbose logging from within loops?
config const T : bool = true;  // use internal Chapel timers?
config const R : bool = true;  // compile a report file?

// file options:
config const dir = ".";  // start here?
config const ext = ".txt";  // use an alternative extension?
config const SAME = "SAME";  // default name ID?
config const DIFF = "DIFF";  // default name ID?

General notes:

From inside FileCheck2.chpl, on the updated sync$ syntax:

//  Chapel-Language  //

use FileSystem;  // for exists(), isFile(), getFileSize(), walkdirs(), findfiles()

module Fs {
  var MasterDom = {("", "")};  // contains same-size files as (a, b)
  var same = {("", "")};       // identical files
  var diff = {("", "")};       // sorted files flagged as same size but not identical
  var sizeZero = {("", "")};   // sort files that are < 8 bytes
}

var sync1$ : sync bool;
sync1$ = true;

proc ParallelRun(a, b) {
  if exists(a) && exists(b) && a != b {
    if isFile(a) && isFile(b) {
      if getFileSize(a) == getFileSize(b) {
        sync1$;               // take the lock: a read waits until full, then empties
        Fs.MasterDom += (a, b);
        sync1$ = true;        // release: a write fills the sync var again
      }
      if getFileSize(a) < 8 && getFileSize(b) < 8 {
        sync1$;
        Fs.sizeZero += (a, b);
        sync1$ = true;
      }
    }
  }
}

coforall folder in walkdirs(".") {
  for a in findfiles(folder, recursive=false) {
    for b in findfiles(folder, recursive=false) {
      ParallelRun(a, b);
    }
  }
}
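For Python folks, the sync1$ dance above is essentially a mutex: reading an empty sync variable blocks, reading a full one empties it, and writing fills it again.  A rough Python analogue of ParallelRun's locking, for intuition only (not part of the repo):

# rough Python analogue of the sync1$ pattern: a lock guarding a shared set
import os
import threading

lock = threading.Lock()  # plays the role of sync1$
master_dom = set()       # plays the role of Fs.MasterDom

def parallel_run(a, b):
    if a != b and os.path.isfile(a) and os.path.isfile(b):
        if os.path.getsize(a) == os.path.getsize(b):
            with lock:  # "sync1$;" takes the lock; "sync1$ = true;" releases it
                master_dom.add((a, b))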

Get some Chapel:

In a (bash) shell, install Chapel:
Mac or Linux here; for other platforms, refer to:

https://chapel-lang.org/docs/usingchapel/QUICKSTART.html

# For Linux bash:
# grab a release tarball (or git clone https://github.com/chapel-lang/chapel):
tar xzf chapel-1.18.0.tar.gz
cd chapel-1.18.0
source util/setchplenv.bash
make
make check

# For Mac OSX bash:
# Just use homebrew
brew install chapel # :)

Get atom editor for Chapel Language support:

# Linux bash:
cd
sudo apt-get install atom
apm install language-chapel
# atom [yourfile.chpl]  # open/make a file with atom

# Mac OSX (download):
# https://github.com/atom/atom
# bash for Chapel language support
apm install language-chapel
# atom [yourfile.chpl]  # open/make a file with atom

Using the Chapel compiler

To compile with Chapel:

chpl MyFile.chpl  # the chpl command is self-sufficient

# compile one file's module(s) into another with -M:
chpl -M classFile runFile.chpl

# to run a compiled Chapel binary:
./runFile

Now Some Python3 Evaluation:

# Adjacent to the compiled FileCheck binary:

python3 Timer_FileCheck.py

Timer_FileCheck.py will loop FileCheck and find the average time it takes to complete, with a variety of additional arguments to toggle parallel and serial operation. The iterations are:

ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]
  • Default – full parallel
  • Serial evaluation (--SE) but parallel domain creation
  • Serial domain creation (--SP) but parallel evaluation
  • Full serial (--SE --SP)

Output is saved as Time_FileCheck_Results.txt

  • Output is also logged after each of the (default 10) loops.

The idea is to evaluate a “--flag” (in this case, serial or parallel operation in FileCheck.chpl) to see if there are time benefits to parallel processing. In this case, there really are not any, because the program relies mostly on disk speed.

Evaluation Test:

# Timer_FileCheck.py
#
# A WIP by Jess Sullivan
#
# evaluate average run speed of both serial and parallel versions
# of FileCheck.chpl  --  NOTE: coforall is used in both BY DEFAULT.
# This is to bypass the slow findfiles() method by dividing file searches
# by number of directories.

import subprocess
import time

File = "./FileCheck" # chapel to run

# default false, use for evaluation
SE = "--SE=true"

# default false, use for evaluation
SP = "--SP=true" # no coforall looping anywhere

# default true, make it false:
R = "--R=false"  #  do not let chapel compile a report per run

# default true, make it false:
T = "--T=false" # no internal chapel timers

# default true, make it false:
V = "--V=false"  #  use verbose logging?

# default is false
bug = "--debug=false"

Default = (File, R, T, V, bug) # default parallel operation
Serial_SE = (File, R, T, V, bug, SE)
Serial_SP = (File, R, T, V, bug, SP)
Serial_SE_SP = (File, R, T, V, bug, SP, SE)


ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]

loopNum = 10 # iterations of each runTime for an average speed.

# setup output file
file = open("Time_FileCheck_Results.txt", "w")

file.write('eval ' + str(loopNum) + ' loops for ' + str(len(ListOptions)) + ' FileCheck Options' + "\n")

def iterateWithArgs(loops, args, runTime):
    for l in range(loops):
        start = time.time()
        subprocess.run(args)
        end = time.time()
        runTime.append(end-start)

for option in ListOptions:
    runTime = []
    iterateWithArgs(loopNum, option, runTime)
    file.write("average runTime for FileCheck with "+ str(option) + "options is " + "\n\\")
    file.write(str(sum(runTime) / loopNum) +"\n\\")
    print("average runTime for FileCheck with " + str(option) + " options is " + "\n\\")
    print(str(sum(runTime) / loopNum) +"\n\\")

file.close()
