Trans Scend Survival

Trans: Latin prefix implying "across" or "beyond", often used in gender nonconforming situations – Scend: archaic word describing a strong "surge" or "wave", originating with 15th century English sailors – Survival: 15th century English compound word describing an existence only worth transcending.


Google Calendar API with Chapel & Python

Mirroring the repo here: Google Calendar API with Chapel & Python

Despite Chapel's many quirks and annoyances surrounding string handling, its efficiency and ease of running in parallel are always welcome. The idea here is that a Chapel script, which may need to weed through enormous numbers of files while looking for a date tag ($D, plus other tags currently), is probably a better choice overall than a pure Python version. (The intent is to test this properly later.)

As of 9/19/19, there is still a laundry list of things to add: control flow (for instance, "don't add the event over and over"), less brittle syntax, annotations, actual error handling, etc. It does find and upload calendar entries, though!

I am using Python with the Google Calendar API (see here: https://developers.google.com/calendar/v3/reference/) in a looping daemon thread. All the sifting for tags is managed by the Chapel binary, which dumps anything it finds into a CSV, from which the daemon pushes calendar entries with proper formatting. FWIW, Google's dates (datetime.datetime) adhere to RFC 3339 (https://tools.ietf.org/html/rfc3339), which is conveniently what the datetime.isoformat() method produces (for a timezone-aware datetime).
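For reference, here is a minimal sketch of that daemon-side push. It assumes an authorized service object already built via the google-api-python-client OAuth flow, and a hypothetical tags.csv layout (date in the first column) dumped by the Chapel binary:

# minimal sketch: push $D dates from a (hypothetical) tags.csv to Google Calendar
import csv
import datetime
import threading
import time

def push_dates(service, csv_path="tags.csv"):
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            # assumes the first column holds the tagged date, e.g. "09/14/19"
            day = datetime.datetime.strptime(row[0], "%m/%d/%y")
            start = day.replace(tzinfo=datetime.timezone.utc)
            end = start + datetime.timedelta(hours=1)
            event = {
                "summary": "Tagged note",
                "start": {"dateTime": start.isoformat()},  # RFC 3339 via isoformat()
                "end": {"dateTime": end.isoformat()},
            }
            service.events().insert(calendarId="primary", body=event).execute()

def daemon_loop(service):
    while True:
        push_dates(service)
        time.sleep(60)  # re-scan the CSV once a minute

# threading.Thread(target=daemon_loop, args=(service,), daemon=True).start()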

Some pesky things to keep in mind:

This script uses a sync$ variable to lock other threads out of an evaluation during concurrency. So far I think the easiest way to manage the resulting domain is from within a module like so:

module charMatches {
  var dates : domain(string);
}

Here, the domain charMatches.dates will need to be accessed as a reference variable from any procedures that need it.

proc dateCheck(aFile, ref choice) {
    ...
}
... 
coforall folder in walkdirs('check/') {
    for file in findfiles(folder) {
        dateCheck(file, charMatches.dates);
    }
}
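For intuition, the lock-around-a-shared-collection pattern that the sync$ variable provides looks roughly like this in Python terms. This is purely illustrative (the names are made up, and the workload is a stand-in), but it mirrors the Chapel structure of a shared domain plus a lock:

# illustrative only: a shared set guarded by a lock, like charMatches.dates + a sync$ variable
import threading

shared = set()            # plays the role of charMatches.dates
lock = threading.Lock()   # plays the role of the sync$ variable

def add_match(value):
    with lock:            # only one thread mutates the set at a time
        shared.add(value)

threads = [threading.Thread(target=add_match, args=("tag-%d" % i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)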

Errors like:

error: unresolved call '_ir_split__ref_string.size'

error: unresolved call 'norm(promoted expression)'

...or other variants of string.split().size (.length, etc.) tie into a Chapel specification issue:

https://github.com/chapel-lang/chapel/issues/7982

The short solution: do not use .split; instead, I have been chopping strings with .partition().

// like so:
...
if choice.contains(line.partition(hSep)[3].partition(hTerminate)[1]) == false {
    ...process string...
    ...
}

Summer 2019 Update!

GIS Updates:

Newish Raster / DEM image → STL tool in the Shiny-Apps repo:

https://github.com/Jesssullivan/Shiny-Apps

See the (non-load balanced!) live example on the Heroku page:

https://kml-tools.herokuapp.com/

Summarized for a forum member here too:  https://www.v1engineering.com/forum/topic/3d-printing-tactile-maps/

CAD / CAM Updates:

Been revamping my CNC thoughts.

Basically, the next move is a complete rebuild (primarily for milling 6061 aluminum).

I am aiming for:

  • Marlin 2.x.x around either a full Rambo or 32-bit Archim 1.0 (https://ultimachine.com/)
  • Dual endstop configuration, CNC only (no hotend support)
  • 500 mm × 500 mm work area with swappable spoiler boards (~700 mm exterior MPCNC conduit length)
  • Continuous compressed-air chip clearing, shop vac / cyclone chip removal
  • Two-chamber, full acoustic enclosure (cutting space + air I/O for vac and compressor)
  • Full OctoPrint networking via GPIO relays

FWIW, here is my SketchUp MPCNC model:

https://3dwarehouse.sketchup.com/model/72bbe55e-8df7-42a2-9a57-c355debf1447/MPCNC-CNC-Machine-34-EMT

Also TinkerCAD version:

https://www.tinkercad.com/things/fnlgMUy4c3i

Electric Drivetrain Development:

BORGI / Axial Flux stuff:

https://community.occupycars.com/t/borgi-build-instructions/37

Designed some rough coil winders for motor design here:

https://community.occupycars.com/t/arduino-coil-winder/99

Repo:  https://github.com/Jesssullivan/Arduino_Coil_Winder

Also, an itty-bitty, skate bearing-scale axial flux / 3-phase motor to hack upon:

https://www.tinkercad.com/things/cTpgpcNqJaB


Cheers-

– Jess

Notes on a Free and Open Source Notes App:  Joplin

Joplin for all your Operating Systems and devices

As a lifelong iOS + OSX user (Apple products), I have used many, many notes apps over the years.  From big-name apps like OmniFocus, Things 3, and Notes+, to all the usual suspects like Trello, Notability, Notemaster, RTM, and others, I always eventually migrate back to Apple Notes, simply because it is always available and always up to date.  There are zero "features" besides this convenience, which is why I am perpetually willing to give a new app a spin.

Joplin is free, open source, and works on OSX, Windows, and Linux, as well as iOS and Android phones.

Find it here:

https://joplin.cozic.net/

brew install joplin 

The most important thing this project has nailed is cloud support and syncing.  I have my iPhone and computers syncing via Dropbox, which is easy to set up and works… really well.  The Joplin folks have added many cloud options, so this is unlikely to be a sticking point for users.

Here are some of the key features:

  • Markdown is totally supported for straightforward and easy formatting
  • External editor support for emacs / atom / etc folks
  • Layout is clean, uncluttered, and just makes sense
  • Built-in markdown text editor and viewer is great
  • Notebook, todo, note, and tags work great across platforms
  • Browser integration, E2EE security, file attachments, and geolocation included

Hopefully this will be helpful.

Cheers,

– Jess

Mac OSX: Fixing GPT and PMBR Tables

My computer recently crashed very, very hard while I was removing a small, empty alternative-OS partition I no longer needed.  This is a fairly mundane operation that I do now and again, part of an ongoing fight to keep at least a few gigs of space free for actual work on a precious 250 GB Mac SSD.

The crash results?  Toasted GPT tables all around.  My 2015 computer's next move was to reboot, only to find essentially no partitions… at all.  What it did show was (wait for it) the Clover bootloader of all things, with a single Windows Boot Camp icon (nothing in there either).  That is so wrong… on all levels!

I brought the machine to the local university repair shop.  They declared it bricked and offered to wipe it.  Back to me it came…

I scheduled an Apple support session with a phone rep; after around 45 minutes of genuinely productive troubleshooting ideas (none of which helped), I was forwarded to a senior supervisor.  She was interested in the problem, and we scheduled a larger block of time.  But in the meantime, I still wanted to try again…

How to recover a garbled GPT table for Mac OSX:

Start with clean SMC and PRAM / NVRAM.

Clearing these actually made accessing Internet Recovery (how we get to a stand-in OS with a terminal) dozens of times faster: from 2.5 hours down to 7 minutes.  I actually waited out the 2.5 hours twice, on separate attempts, before I cleared these.

Follow these Apple links to perform these operations:

https://support.apple.com/en-us/HT204063

https://support.apple.com/en-us/HT201295

Have a text editor open somewhere to copy down values as you go.

Restart the computer into Recovery: Command + R, or Option + Command + R for Internet Recovery.

Wait.

Open a Terminal.  The graphical Disk Utility is useless here because the disk / partition we want is unreachable (so it will say everything is great).

Run:

diskutil list

For me, I see that disk0s2 is 180.6 GB.  That's my stuff!

I also found /dev/disk2 through /dev/disk14 to be tiny partitions; don't worry about those.

The syntax you are looking for is:

Name: “untitled” Identifier: disk#

(NOT disk#s#)

Write down ALL of the above information for the disk you are after.  That is probably disk0.

Then:

gpt -r show disk0

Copy down the readout in your terminal for all entries bigger than 32.  The critical fields here are Start, Size, Index, and Contents; every one of them is supremely important.

Here is mine (formatted for web):

# Disk0, entries bigger than "32":

# First table:
Start: 40
Size: 409600
Index: 1
Contents: C12A7328-F81F-11D2-BA4B-00A0C93EC93B

# Second table, the one with my data:
Start: 409640
Size: 352637568
Index: 2
Contents: FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF

Note: this is the initial Contents value.  I had rewritten it once with the correct Apple Index 2 type but did not create a new table (leaving the rest of the broken bits broken).  We are replacing / destroying a table here, not the data.

Actions:

# unmount the disk.  From here we are doing tables, not disks / data.

diskutil unmountDisk disk0

# Get rid of the GPT on the disk we are recovering.  We are not touching the data.

gpt destroy disk0

# Make a new one to start with some fresh values.

gpt create -f disk0

# perform magic trick

# USE THE DATA YOU WROTE DOWN FROM “gpt -r show disk0”.  THIS IS IMPORTANT.

# we must add that first small partition at index 1.  Verbatim.

gpt add -i 1 -b 40 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B disk0

# index two (for me) is my data.  We are going to use the default OSX / Mac HD partition values.

# the Size of "372637568" is not as sure-fire as the GPT Contents.

# YMMV, but YOLO.

gpt add -i 2 -b 409640 -s 372637568 -t 7C3457EF-0000-11AA-AA11-00306543ECAC disk0

Again, that Contents value is 7C3457EF-0000-11AA-AA11-00306543ECAC (the Apple APFS partition type).

– Jess

Written on the recovered computer xD

Musings On Chapel Language and Parallel Processing

View the README mirror from my GitHub repo below; scroll down for my Python 3 evaluation script.

…or visit the page directly: https://github.com/Jesssullivan/ChapelTests

ChapelTests

Investigating modern concurrent programming ideas with Chapel Language and Python 3

See here for dupe detection: /FileChecking-with-Chapel

Iterating through all files for custom tags / syntax: /GenericTagIterator

added 9/14/19:

The thinking here is that one could write a global, shorthand / tag-based note manager making use of an efficient tag-gathering tool like the example here. Gone are the days of actually needing a note manager: when the need presents itself, one could just add a calendar item, todo, etc. with a global tag syntax.

The test uses $D for date: $D 09/14/19

//  Chapel-Language  //

// non-annotated file @ /GenericTagIterator/nScan.chpl //

use FileSystem;
use IO;
use Time;

config const V : bool=true;  // verbose logging, currently default!

module charMatches {
  var dates = {("")};  
}

// var sync1$ : sync bool=true;  not used in example- TODO: add sync$ var back in!!

proc charCheck(aFile, ref choice, sep, sepRange) {

    // note, reference argument (ref choice) is needed if using Chapel structure "module.domain"

    try {
        var line : string;
        var tmp = openreader(aFile);
        while(tmp.readline(line)) {
            if line.find(sep) > 0 {
                choice += line.split(sep)[sepRange];
                if V then writeln('adding '+ sep + ' ' + line.split(sep)[sepRange]);
            }
        }
    tmp.close();
    } catch {
      if V then writeln("caught err");
    }
}

coforall folder in walkdirs('check/') {
    for file in findfiles(folder) {
        charCheck(file, charMatches.dates, '$D ', 1..8);
    }
}
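For an eventual timing comparison against a pure-Python version (mentioned above), a rough serial sketch of the same scan might look like this. The 'check/' path and '$D ' separator mirror the Chapel example, and the width of 8 matches the "$D 09/14/19" format; it is a baseline sketch, not a tuned benchmark:

# rough pure-Python baseline of the $D scan, for later timing comparisons
import os

def char_check(path, choice, sep="$D ", width=8):
    # collect the `width` characters following each tag occurrence
    try:
        with open(path, errors="ignore") as f:
            for line in f:
                if sep in line:
                    choice.add(line.partition(sep)[2][:width])
    except OSError:
        pass

dates = set()
for folder, _, files in os.walk("check/"):
    for name in files:
        char_check(os.path.join(folder, name), dates)

print(dates)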

Get some Chapel:

In a (bash) shell, install Chapel:
Mac or Linux here, others refer to:

https://chapel-lang.org/docs/usingchapel/QUICKSTART.html

# For Linux bash: build from source
# (or grab a release tarball like chapel-1.18.0.tar.gz, tar xzf it, and cd into it instead of cloning)
git clone https://github.com/chapel-lang/chapel
cd chapel
source util/setchplenv.bash
make
make check

# For Mac OSX bash:
# Just use homebrew
brew install chapel # :)

Get atom editor for Chapel Language support:

#Linux bash:
cd
sudo apt-get install atom
apm install language-chapel
# atom [yourfile.chpl]  # open/make a file with atom

# Mac OSX (download):
# https://github.com/atom/atom
# bash for Chapel language support
apm install language-chapel
# atom [yourfile.chpl]  # open/make a file with atom

Using the Chapel compiler

To compile with Chapel:

chpl MyFile.chpl # the chpl command is self-sufficient

# pull modules in from another location (-M adds a module search directory):

chpl -M classFile runFile.chpl

# to run a Chapel file:
./runFile

Now Some Python3 Evaluation:

# Adjacent to the compiled FileCheck binary:

python3 Timer_FileCheck.py

Timer_FileCheck.py will loop FileCheck and find the average times it takes to complete, with a variety of additional arguments to toggle parallel and serial operation. The iterations are:

ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]
  • Default – full parallel

  • Serial evaluation (--SE) but parallel domain creation

  • Serial domain creation (--SP) but parallel evaluation

  • Full serial (--SE --SP)

Output is saved as Time_FileCheck_Results.txt

  • Output is also logged after each of the (default 10) loops.

The idea is to evaluate a "--flag" (in this case, serial or parallel behavior in FileCheck.chpl) to see if there are time benefits to parallel processing. In this case, there really are not any, because the program relies mostly on disk speed.

Evaluation Test:

# Time_FileCheck.py
#
# A WIP by Jess Sullivan
#
# evaluate average run speed of both serial and parallel versions
# of FileCheck.chpl  --  NOTE: coforall is used in both BY DEFAULT.
# This is to bypass the slow findfiles() method by dividing file searches
# by number of directories.

import subprocess
import time

File = "./FileCheck" # chapel to run

# default false, use for evaluation
SE = "--SE=true"

# default false, use for evaluation
SP = "--SP=true" # no coforall looping anywhere

# default true, make it false:
R = "--R=false"  #  do not let chapel compile a report per run

# default true, make it false:
T = "--T=false" # no internal chapel timers

# default true, make it false:
V = "--V=false"  #  use verbose logging?

# default is false
bug = "--debug=false"

Default = (File, R, T, V, bug) # default parallel operation
Serial_SE = (File, R, T, V, bug, SE)
Serial_SP = (File, R, T, V, bug, SP)
Serial_SE_SP = (File, R, T, V, bug, SP, SE)


ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]

loopNum = 10 # iterations of each runTime for an average speed.

# setup output file
file = open("Time_FileCheck_Results.txt", "w")

file.write(str('eval ' + str(loopNum) + ' loops for ' + str(len(ListOptions)) + ' FileCheck Options' + "\n\\"))

def iterateWithArgs(loops, args, runTime):
    for l in range(loops):
        start = time.time()
        subprocess.run(args)
        end = time.time()
        runTime.append(end-start)

for option in ListOptions:
    runTime = []
    iterateWithArgs(loopNum, option, runTime)
    file.write("average runTime for FileCheck with "+ str(option) + "options is " + "\n\\")
    file.write(str(sum(runTime) / loopNum) +"\n\\")
    print("average runTime for FileCheck with " + str(option) + " options is " + "\n\\")
    print(str(sum(runTime) / loopNum) +"\n\\")

file.close()
