Compare commits

...

132 Commits

Author SHA1 Message Date
6c20bde56b Re-enable comments 2024-07-22 15:27:42 +00:00
32437054a2 Disable comment and subscribe forms
I'm migrating servers and too lazy to set this up again, hardly anyone
used it anyways.
2024-07-16 00:32:34 -04:00
5dda9d0080 Update resume for 2023 2023-07-21 14:22:10 -04:00
c4aac84f75 Update subtle patterns url
I keep getting emails to do this.
2023-02-16 15:18:23 -05:00
f80ce959a2 Update resume 2022-12-16 11:33:42 -05:00
0e146bf113 Make blog post public 2022-10-05 22:27:20 -04:00
1814c887ee Reduce some repetition in the image captions 2022-10-05 21:21:50 -04:00
6735739926 New blog post about modmapper in hidden mode 2022-10-05 20:05:01 -04:00
bbb7af595a Lighten dark mode images 2022-10-05 18:43:00 -04:00
110fee1fc5 Improve blog post headers with flexbox 2022-10-05 18:42:35 -04:00
afefaadabd Update ruby version, Gemfile, and post CSS 2022-09-26 00:57:23 -04:00
724daefe72 Fix page background color in dark theme 2022-08-17 18:16:42 -04:00
9f3a4fc64e Update profile photo 2022-01-04 14:38:23 -05:00
3b6515dff5 Update resume 2021-08-31 11:18:28 -04:00
3b2ca96072 Remove z-index on canvas 2021-08-01 01:15:44 -04:00
771383a6bf Fix class attribute typo on home template 2021-08-01 01:11:49 -04:00
a617232868 Fix dark mode background across chrome and firefox 2021-08-01 01:03:13 -04:00
ee40b651a9 Fix dark mode 2021-07-31 23:46:00 -04:00
8b8bfdb4a2 Update profile photos 2021-04-27 12:44:13 -04:00
1e6e774792 Update resume for 2021 2021-01-19 11:25:34 -05:00
e09235ef4d Switch to Cloudflare Web Analytics 2020-12-09 12:02:01 -05:00
5e09843db8 Update home page for Outcomes4Me 2020-09-11 23:58:45 -04:00
7f82a6e110 I can write JSON. I promise. 2020-05-08 00:21:05 -04:00
b1f1b30cc0 Add .well-known for matrix server 2020-05-07 23:37:46 -04:00
7b827f88e2 Second resume update for March 2020 2020-03-23 13:20:05 -04:00
b13cd62712 Resume update for March 2020 2020-03-09 00:03:04 -04:00
3afb7e267d Add image to last blog post 2020-02-01 21:08:58 -05:00
aeceab76e1 Make github pages redeploy 2020-02-01 19:07:28 -05:00
87e7ad3a3d New blog post: icosahedrons and hexspheres in rust 2020-02-01 19:06:09 -05:00
b7699ccdc6 More styling tweaks 2020-01-05 19:12:43 -05:00
448c6b86b4 Styling tweaks
Make theme switcher a toggle.
2020-01-05 18:57:45 -05:00
75f288b222 Merge branch 'master' of github.com:thallada/thallada.github.io 2020-01-05 18:45:43 -05:00
c7ab879654 Move theme switcher to bottom in home 2020-01-05 18:45:26 -05:00
cccd620cec Create CNAME 2020-01-05 18:42:14 -05:00
af2aa63769 Delete CNAME 2020-01-05 18:41:32 -05:00
7a800f6b12 Update CNAME 2020-01-05 18:38:52 -05:00
4768d75067 Create CNAME 2020-01-05 18:36:39 -05:00
26a8f62675 Delete CNAME 2020-01-05 18:36:21 -05:00
08488906aa Create CNAME 2020-01-05 18:32:10 -05:00
1fea619511 Delete CNAME 2020-01-05 18:31:57 -05:00
f8639043ff Create CNAME 2020-01-05 18:15:51 -05:00
67bad2a2f8 Add dark theme with theme switcher 2020-01-05 18:09:53 -05:00
097e97b24c Fix typo on main page 2019-11-25 10:11:03 -05:00
12f66b211b Consider is email 2.0 2019-11-22 13:28:30 -05:00
0cc1e0efaf wording tweak 2019-08-27 12:44:49 -04:00
2e60f02a4c Update Consider tagline 2019-08-27 10:26:38 -04:00
edfda530f2 Make edit to Isso post
The Debian package just isn't working for me anymore.
2019-05-28 22:45:22 -04:00
ede52a6df5 Update homepage about me 2018-07-25 10:05:38 -04:00
204bdd4cab Update tokei output for edx-platform
No more CoffeeScript!
2018-05-02 16:09:39 -04:00
44123fbd70 Use jpg instead of png in latest post
Also update assets table screenshot to latest changes.
2018-05-02 16:05:21 -04:00
ac6eb1bf04 Delete CNAME 2018-04-27 00:15:32 -04:00
816995639e Create CNAME 2018-04-27 00:01:59 -04:00
9bd9499793 Add new post about studio-frontend 2018-04-26 23:53:53 -04:00
802d162335 Load resources in https 2018-04-26 23:51:31 -04:00
0ebb6e4268 Specify post excerpts with comment separator 2018-04-26 23:50:52 -04:00
c55aa004df 2018 resume update 2018-03-23 10:38:11 -04:00
6e29547843 Ensure all directory urls end in a trailing slash 2017-11-29 19:45:34 -05:00
d88d2fc61a Delete CNAME 2017-11-29 19:07:59 -05:00
5fd93090f4 Create CNAME 2017-11-29 18:47:08 -05:00
5f93271780 Merge branch 'master' of github.com:thallada/thallada.github.io 2017-11-29 18:34:49 -05:00
c142201dfd Switch to https 2017-11-29 18:34:38 -05:00
c43465dd42 Delete CNAME 2017-11-29 16:37:40 -05:00
8388a94f2c New blog post: Isso comments 2017-11-15 23:52:35 -05:00
0aaf6e7748 Use https on comments.hallada.net 2017-11-15 22:05:54 -05:00
34009ecd61 Add Isso comments to blog 2017-11-15 21:56:48 -05:00
ba8e1100b4 Mail form submit button value and html tag spacing 2017-09-05 21:53:00 -04:00
ea0a6d49f2 New blog post about mailing list 2017-08-30 01:02:32 -04:00
b590422519 Add mailing list sign up form to blog 2017-08-29 18:18:02 -04:00
c419f953b5 Wrap image in link to animation 2017-08-08 11:19:36 -04:00
c97cf7336f grammar 2017-08-08 00:17:12 -04:00
de5bf28641 Apparently titles can't have colons. Thanks jekyll 2017-08-08 00:08:00 -04:00
ca882ed199 Okay, whatever, post it. 2017-08-08 00:04:12 -04:00
45e4f4e91e That post is not hidden. Awkward. 2017-08-08 00:03:22 -04:00
104e56c1ec That's markdown, not html 2017-08-07 23:59:26 -04:00
da634dde04 Draft of new post: proximity structures 2017-08-07 23:56:52 -04:00
71f6822904 Fix link typo, change image for post 2017-07-12 20:08:46 -04:00
0d361737a0 Finally publish blog post 2017-07-11 16:25:01 -04:00
2e7e51d587 Back to flat image url 2017-07-11 15:46:32 -04:00
50b4093068 Try to fix SEO image syntax 2017-07-11 15:30:01 -04:00
8e31f41388 Commit the image 2017-07-11 15:27:52 -04:00
e3f729e918 Try a bigger og:image 2017-07-11 15:27:36 -04:00
31307d04c6 Fix url and images config, add authors file 2017-07-11 15:21:28 -04:00
7fb07fa372 Add SEO tags to all pages 2017-07-11 15:10:00 -04:00
e95350e6c1 Use footnote markdown feature 2017-07-11 15:09:27 -04:00
0662158a12 Draft editing 2017-07-11 10:56:24 -04:00
97713d803b Add new hidden blog post (draft) 2017-07-11 02:09:48 -04:00
d6a1965d55 Exclude hidden posts from feed. 2017-07-11 02:09:28 -04:00
f5b071f84a Center images and tables 2017-07-11 01:51:09 -04:00
0b113e3e05 Fix text highlighting and add table formatting. 2017-07-11 00:46:03 -04:00
0553103fa0 Smaller page margins on mobile 2017-06-21 00:50:12 -04:00
44fe4c60a3 Fix headings in README 2017-06-20 21:33:47 -04:00
17910ed56e Correct nvcc make error message 2017-06-20 18:34:32 -04:00
e3f5e5b8bb Add missing link to tensorflow post 2017-06-20 18:27:07 -04:00
ae939606eb Fix bad code block formatting on tensorflow post 2017-06-20 18:23:52 -04:00
32f1ca311b Fix typo and publish post 2017-06-20 18:20:57 -04:00
42e8b2cd5f Finish draft of tensorflow install post 2017-06-20 18:11:12 -04:00
57fcb12a82 Quick typo edit on draft 2017-06-20 01:55:09 -04:00
1511b102ca Add WIP draft about installing tensorflow 2017-06-20 01:49:23 -04:00
1ae6754302 Add keybase proof to new .well-known folder 2017-02-08 16:03:17 -05:00
8059fa43e1 Update resume with edX, remove phone and address 2016-10-11 12:30:19 -04:00
909430a9b7 New profile pic 2016-06-08 18:08:17 -04:00
7539848957 Update about me text. New job! 2016-06-08 17:49:10 -04:00
f4c110297b Small resume update 2016-05-24 12:44:53 -04:00
307411e26d Resume update: education below experience 2016-05-24 12:30:55 -04:00
21a6a54425 Months instead of seasons on resume 2016-05-23 23:40:50 -04:00
d8cb76de61 More major resume update from feedback 2016-05-23 23:19:39 -04:00
67f6cd56a7 Small resume update 2016-05-18 17:21:39 -04:00
68777c4eb5 Merge branch 'master' of github.com:thallada/thallada.github.io 2016-05-17 15:09:35 -04:00
c609b17be6 Update resume for May 2016. 2016-05-17 15:09:04 -04:00
Tyler Hallada 89b95fd2cb Double quotes in html tags. No link self-end tags. 2016-04-22 18:46:36 -04:00
Tyler Hallada 55bdede69b Replaced bad end br tag 2016-04-22 18:33:53 -04:00
e0e2dc0894 Convert posts to kramdown-style headers 2016-02-03 13:00:56 -05:00
ed344e732f Update to jekyll 3.0 2016-02-03 12:53:10 -05:00
Tyler Hallada dc13fac7be Update resume with GPA from NEU 2016-02-03 11:37:08 -05:00
4eb6dd4579 I graduated, yay! 2016-01-26 21:23:38 -05:00
Tyler Hallada 2227f2d55c Update resume for 2016 2016-01-20 14:34:15 -05:00
Tyler Hallada df4772d677 Publish it! 2016-01-06 23:40:03 -05:00
Tyler Hallada c7d656b94e Edit hidden post 2016-01-06 23:12:07 -05:00
Tyler Hallada 340dedbd1b Add hidden post attribute and add hidden post 2016-01-06 23:08:50 -05:00
Tyler Hallada dd7418be14 New blog draft 2016-01-06 21:50:51 -05:00
Tyler Hallada 20d613c6de Add some html attributes for more info 2015-08-03 12:59:45 -04:00
c127ae7f07 Add section on Gimp 2015-06-03 23:13:24 -04:00
71ad3f3e60 Last minute edits 2015-06-03 23:06:56 -04:00
701ef5c338 Merge branch 'master' of https://github.com/thallada/thallada.github.io 2015-06-03 22:40:03 -04:00
62a03ab7e1 Update jekyll conf. New post w/ assets. 2015-06-03 22:38:59 -04:00
495db4a81c Change references of m.reddit.com to .mobile 2015-06-01 23:47:33 -04:00
fd63ee7c69 Re-edit w3m-reddit script with .mobile fix 2015-06-01 23:47:29 -04:00
de223c1efa Update w3m-reddit post about new m.reddit.com 2015-06-01 23:47:07 -04:00
Tyler Hallada 4133e61043 Change references of m.reddit.com to .mobile 2015-04-24 16:24:54 -04:00
Tyler Hallada f565688bcd Re-edit w3m-reddit script with .mobile fix 2015-04-24 16:11:50 -04:00
Tyler Hallada 4825ae89ca Update w3m-reddit post about new m.reddit.com 2015-04-24 15:03:18 -04:00
61b8ceea80 Updating README, add inline hide-desktop/mobile 2015-01-13 15:15:24 -05:00
165 changed files with 4223 additions and 178 deletions

1
.ruby-version Normal file

@@ -0,0 +1 @@
3.1.2
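The new `.ruby-version` file pins the Ruby toolchain used to build the site; version managers such as rbenv and asdf read it automatically. A minimal sketch of that lookup (recreating the one-line file in the current directory is my own scaffolding, not part of the repo):

```shell
# Recreate the one-line version file from the diff above, then read it
# back the way a Ruby version manager would:
printf '3.1.2\n' > .ruby-version
version=$(cat .ruby-version)
echo "$version"
```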

54
.well-known/keybase.txt Normal file

@@ -0,0 +1,54 @@
==================================================================
https://keybase.io/thallada
--------------------------------------------------------------------
I hereby claim:
* I am an admin of http://www.hallada.net
* I am thallada (https://keybase.io/thallada) on keybase.
* I have a public key ASDOaZj16vBUa4vjFa4dhC7map8qIX5MUSCqjgWeX1CfbQo
To do so, I am signing this object:
{
"body": {
"key": {
"eldest_kid": "0120ce6998f5eaf0546b8be315ae1d842ee66a9f2a217e4c5120aa8e059e5f509f6d0a",
"host": "keybase.io",
"kid": "0120ce6998f5eaf0546b8be315ae1d842ee66a9f2a217e4c5120aa8e059e5f509f6d0a",
"uid": "bf2238122821bbc309d5bf1ed2421d19",
"username": "thallada"
},
"service": {
"hostname": "www.hallada.net",
"protocol": "http:"
},
"type": "web_service_binding",
"version": 1
},
"client": {
"name": "keybase.io go client",
"version": "1.0.18"
},
"ctime": 1486587579,
"expire_in": 504576000,
"merkle_root": {
"ctime": 1486587569,
"hash": "099473519e50871bbe05a57da09e5ba8fa575e8a51e259dd6ad6e8e4ead07fdb63895424659bc01aef600d5caa2ddaf32bd194fec6e08b54f623c0bf381df30c",
"seqno": 846152
},
"prev": "7ec1319be81dde8362b777b669904944eaebd592206c17953c628749b41cf992",
"seqno": 9,
"tag": "signature"
}
which yields the signature:
hKRib2R5hqhkZXRhY2hlZMOpaGFzaF90eXBlCqNrZXnEIwEgzmmY9erwVGuL4xWuHYQu5mqfKiF+TFEgqo4Fnl9Qn20Kp3BheWxvYWTFAvZ7ImJvZHkiOnsia2V5Ijp7ImVsZGVzdF9raWQiOiIwMTIwY2U2OTk4ZjVlYWYwNTQ2YjhiZTMxNWFlMWQ4NDJlZTY2YTlmMmEyMTdlNGM1MTIwYWE4ZTA1OWU1ZjUwOWY2ZDBhIiwiaG9zdCI6ImtleWJhc2UuaW8iLCJraWQiOiIwMTIwY2U2OTk4ZjVlYWYwNTQ2YjhiZTMxNWFlMWQ4NDJlZTY2YTlmMmEyMTdlNGM1MTIwYWE4ZTA1OWU1ZjUwOWY2ZDBhIiwidWlkIjoiYmYyMjM4MTIyODIxYmJjMzA5ZDViZjFlZDI0MjFkMTkiLCJ1c2VybmFtZSI6InRoYWxsYWRhIn0sInNlcnZpY2UiOnsiaG9zdG5hbWUiOiJ3d3cuaGFsbGFkYS5uZXQiLCJwcm90b2NvbCI6Imh0dHA6In0sInR5cGUiOiJ3ZWJfc2VydmljZV9iaW5kaW5nIiwidmVyc2lvbiI6MX0sImNsaWVudCI6eyJuYW1lIjoia2V5YmFzZS5pbyBnbyBjbGllbnQiLCJ2ZXJzaW9uIjoiMS4wLjE4In0sImN0aW1lIjoxNDg2NTg3NTc5LCJleHBpcmVfaW4iOjUwNDU3NjAwMCwibWVya2xlX3Jvb3QiOnsiY3RpbWUiOjE0ODY1ODc1NjksImhhc2giOiIwOTk0NzM1MTllNTA4NzFiYmUwNWE1N2RhMDllNWJhOGZhNTc1ZThhNTFlMjU5ZGQ2YWQ2ZThlNGVhZDA3ZmRiNjM4OTU0MjQ2NTliYzAxYWVmNjAwZDVjYWEyZGRhZjMyYmQxOTRmZWM2ZTA4YjU0ZjYyM2MwYmYzODFkZjMwYyIsInNlcW5vIjo4NDYxNTJ9LCJwcmV2IjoiN2VjMTMxOWJlODFkZGU4MzYyYjc3N2I2Njk5MDQ5NDRlYWViZDU5MjIwNmMxNzk1M2M2Mjg3NDliNDFjZjk5MiIsInNlcW5vIjo5LCJ0YWciOiJzaWduYXR1cmUifaNzaWfEQIiv6L2Js0MEXpgiHcIhP9B3MmBl+81QA0z32QYT5XWXsH/6rsylYYQCLWjrIXAILrOIoH5Jyd2GFfpU+O8A/w2oc2lnX3R5cGUgpGhhc2iCpHR5cGUIpXZhbHVlxCDXYP2O217NRz/lX6Q6G54D0G6/EIfX2OJY/hduqsrYNqN0YWfNAgKndmVyc2lvbgE=
And finally, I am proving ownership of this host by posting or
appending to this document.
View my publicly-auditable identity here: https://keybase.io/thallada
==================================================================

.well-known/matrix/server Normal file

@@ -0,0 +1,3 @@
{
"m.server": "synapse.hallada.net:443"
}
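The JSON above is a Matrix server-delegation document: clients and federating homeservers fetch `/.well-known/matrix/server` from the website's domain and connect to the `m.server` host instead. A small sketch of extracting that value (the `sed` pattern is mine, not from the repo, and is not a full JSON parser):

```shell
# Delegation document content, copied from the diff above:
json='{ "m.server": "synapse.hallada.net:443" }'

# Extract the delegated homeserver host:port:
server=$(printf '%s' "$json" | sed -n 's/.*"m\.server" *: *"\([^"]*\)".*/\1/p')
echo "$server"
```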

2
CNAME

@@ -1 +1 @@
 www.hallada.net

Gemfile

@@ -1,2 +1,4 @@
 source 'http://rubygems.org'
-gem 'github-pages'
+gem "webrick"
+gem 'github-pages', group: :jekyll_plugins

README.md

@@ -4,9 +4,21 @@ thallada.github.io
 This is the latest version of my personal website. It is a static website built
 with [Jekyll](http://jekyllrb.com/).
-See it at [http://www.hallada.net/](http://www.hallada.net/).
+See it at [https://www.hallada.net/](https://www.hallada.net/).
+
+## Build Locally
+
+To run a version of the site in development locally. Checkout this repo and
+then:
+
+1. `cd thallada.github.io`
+2. [Install Jekyll](https://jekyllrb.com/docs/installation/)
+3. Run `bundle install`
+3. Run `bundle exec jekyll serve`
+4. Visit `http://localhost:4000` to view the website
+
-##Magic##
+## Magic
 Most of the development work of this website went into creating what I like to
 call "magic", or the dynamic background to my homepage. A few seconds after
 loading the page, a branching web of colored tendrils will grow in a random
@@ -29,11 +41,14 @@ random, colorful, and more CPU efficient.
 It was really fun to tweak various variables in the script and see how the
 animation reacted. It didn't take much tweaking to get the lines to appear like
 lightning flashing in the distant background, or like cracks splitting the
-screen, or like growing forest of sprouting trees. A future project may involve
-putting the magic up on its own webpage and add UI dials to allow anyone to
-change these variables in realtime.
+screen, or like growing forest of sprouting trees.
+
+You can play around with these variables yourself on the [/magic
+page](https://www.hallada.net/magic) which has sliders for tweaking the
+animations in realtime.
-##Layout & CSS##
+## Layout & CSS
 I use a [grid system devised by Adam Kaplan](http://www.adamkaplan.me/grid/) and
 with some pieces from [Jorden Lev](http://jordanlev.github.io/grid/). It is
 set-up by scratch in my `main.css`. I decided on this so that I would not have
@@ -105,11 +120,37 @@ desktop, use `hide-desktop` instead.
 </div>
 ```
+
+I had an issue with displaying elements on desktop that had the class
+"hide-mobile", so you can add the following classes to make sure they redisplay
+in the right display type correctly:
+
+* `hide-mobile-block`
+* `hide-mobile-inline-block`
+* `hide-mobile-inline`
+* `hide-deskop-block`
+* `hide-desktop-inline-block`
+* `hide-desktop-inline`
+
+I could add more for each `display` property, but I'm trying to find a better
+way of fixing this without adding these second classes.
 Another note: I use [box-sizing (as suggested by Paul
 Irish)](http://www.paulirish.com/2012/box-sizing-border-box-ftw/), which I think
 makes dealing with sizing elements a lot more sane.
-##Attributions##
+### Light & Dark Themes
+
+In 2020, I created a dark theme for the website. The dark theme is used if it
+detects that the user's OS is set to prefer a dark theme [using the
+`prefers-color-scheme` `@media`
+query](https://css-tricks.com/dark-modes-with-css/).
+
+To allow the user to select a theme separate from their OS's theme, I have also
+included [a switch that can toggle between the two
+themes](https://github.com/GoogleChromeLabs/dark-mode-toggle).
+
+## Attributions
 [Book](http://thenounproject.com/term/book/23611/) designed by [Nherwin
 Ardoña](http://thenounproject.com/nherwinma) from the Noun Project.

_config.yml

@@ -1,9 +1,32 @@
+title: Tyler Hallada
 name: Tyler Hallada - Blog
 description: Musings on technology, literature, and interesting topics
-url: http://hallada.net/blog
-markdown: redcarpet
-pygments: true
+author: thallada
+url: https://www.hallada.net
+blog_url: https://www.hallada.net/blog
+assets: https://hallada.net/assets/
+logo: /img/profile_icon_128x128.jpg
+social:
+  name: Tyler Hallada
+  links:
+    - https://twitter.com/tyhallada
+    - https://www.facebook.com/tyhallada
+    - https://www.linkedin.com/in/thallada/
+    - https://github.com/thallada
+defaults:
+  -
+    scope:
+      path: ""
+    values:
+      image: /img/profile_icon_300x200.jpg
+markdown: kramdown
+kramdown:
+  syntax_highlighter: rouge
+excerpt_separator: "<!--excerpt-->"
 paginate: 10
 paginate_path: "blog/page:num"
 gems:
   - jekyll-redirect-from
+  - jekyll-paginate
+  - jekyll-seo-tag
+include: [".well-known"]
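The `excerpt_separator: "<!--excerpt-->"` setting added to `_config.yml` above makes Jekyll treat everything before that marker as a post's excerpt, which is why the same `<!--excerpt-->` line gets inserted into several old posts later in this diff. A shell sketch of the split, on a made-up post body:

```shell
# A made-up post body using the separator from _config.yml:
post='First paragraph, shown on the blog index.
<!--excerpt-->
The rest of the post, shown only on the post page.'

# Jekyll keeps everything before the first separator as the excerpt;
# the same split with POSIX parameter expansion:
excerpt="${post%%<!--excerpt-->*}"
printf '%s' "$excerpt"
```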

3
_data/authors.yml Normal file

@@ -0,0 +1,3 @@
thallada:
picture: /img/profile_icon_128x128.jpg
twitter: tyhallada

12
_includes/comments.html Normal file

@@ -0,0 +1,12 @@
<div class="card">
<div class="row clearfix">
<div class="column full">
<script data-isso="https://comments.hallada.net/"
src="https://comments.hallada.net/js/embed.min.js"></script>
<section id="isso-thread">
<noscript>Javascript needs to be activated to view comments.</noscript>
</section>
</div>
</div>
</div>

36
_includes/mail-form.html Normal file

@@ -0,0 +1,36 @@
<div class="card">
<div class="subscribe-form">
<div class="row clearfix">
<div class="column full">
<h3>Subscribe to my future posts</h3>
</div>
</div>
<form action="https://list.hallada.net/subscribe" method="POST" accept-charset="utf-8">
<div class="row clearfix">
<div class="column half">
<label for="name">Name (optional)</label><br />
<input type="text" name="name" id="name" />
</div>
<div class="column half">
<label for="email">Email</label><br />
<input type="email" name="email" id="email" />
</div>
</div>
<div class="row clearfix">
<div style="display:none;">
<label for="hp">HP</label><br />
<input type="text" name="hp" id="hp" />
</div>
<input type="hidden" name="list" value="Q7JrUBzeCeftZqwDtxsQ9w" />
<div class="column half">
<input type="submit" name="submit" id="submit" value="Submit" />
</div>
<div class="column half">
<span class="form-rss">Or subscribe to my <a href="/feed.xml">RSS feed</a></span>
</div>
</div>
</form>
<div class="row clearfix">
</div>
</div>
</div>

_layouts/default.html

@@ -1,5 +1,5 @@
 <!DOCTYPE html>
-<html>
+<html lang="en">
 <head>
   <meta charset="utf-8">
   <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
@@ -10,54 +10,62 @@
   <link rel="stylesheet" href="/css/normalize.css">
   <!-- Fix IE -->
   <!--[if lt IE 9]>
-    <script src="http://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7/html5shiv.js"></script>
-    <script src="http://cdnjs.cloudflare.com/ajax/libs/respond.js/1.4.2/respond.js"></script>
+    <script src="https://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7/html5shiv.js"></script>
+    <script src="https://cdnjs.cloudflare.com/ajax/libs/respond.js/1.4.2/respond.js"></script>
   <![endif]-->
   <!-- syntax highlighting CSS -->
   <link rel="stylesheet" href="/css/syntax.css">
+  <link rel="stylesheet" href="/css/syntax_dark.css" media="(prefers-color-scheme: dark)">
   <!-- Custom CSS -->
   <link rel="stylesheet" href="/css/main.css">
+  <link rel="stylesheet" href="/css/main_dark.css" media="(prefers-color-scheme: dark)">
   <!-- Web Fonts -->
-  <link href='http://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,700italic,400,300,700' rel='stylesheet' type='text/css'>
+  <link href="https://fonts.googleapis.com/css?family=Open+Sans:300italic,400italic,700italic,400,300,700" rel="stylesheet" type="text/css">
   <!-- RSS Feed -->
-  <link href='/feed.xml' rel='alternate' type='application/atom+xml'>
+  <link href="/feed.xml" rel="alternate" type="application/atom+xml">
   <!-- Favicon -->
-  <link rel="shortcut icon" href="/favicon.png" />
-  <!-- Google Analytics -->
-  <script>
-    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
-    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
-    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
-    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
-    ga('create', 'UA-39880341-1', 'auto');
-    ga('send', 'pageview');
-  </script>
-  <!-- End Google Analytics -->
+  <link rel="shortcut icon" href="/favicon.png">
+  <!-- Cloudflare Web Analytics -->
+  <script defer src='https://static.cloudflareinsights.com/beacon.min.js' data-cf-beacon='{"token": "54df1dc81a2d4cb7920b456212bbd437"}'></script>
+  <!-- End Cloudflare Web Analytics -->
+
+  <script type="module" src="https://unpkg.com/dark-mode-toggle"></script>
+
+  {% seo %}
 </head>
 <body>
-  <div class="container">
-    <div class="row clearfix header">
-      <h1 class="title"><a href="/blog">{{ site.name }}</a></h1>
-      <a class="extra" href="/">home</a>
-    </div>
-    {{ content }}
-    <div class="row clearfix rss">
-      <a class="rss" href="/feed.xml"><img src="/img/rss.png" alt="RSS"/></a>
-    </div>
-    <div class="row clearfix footer">
-      <div class="column full contact">
-        <p class="contact-info">
-          <a href="mailto:tyler@hallada.net">tyler@hallada.net</a>
-        </p>
-      </div>
-    </div>
-  </div>
+  <div class="root">
+    <div class="container">
+      <div class="row clearfix header">
+        <h1 class="title"><a href="/blog/">{{ site.name }}</a></h1>
+        <a class="extra" href="/">home</a>
+      </div>
+      {{ content }}
+      <div class="row clearfix rss">
+        <a class="rss" href="/feed.xml"><img src="/img/rss.png" alt="RSS" class="icon" /></a>
+      </div>
+      <div class="row clearfix footer">
+        <div class="column full contact">
+          <p class="contact-info">
+            <a href="mailto:tyler@hallada.net">tyler@hallada.net</a>
+          </p>
+        </div>
+      </div>
+    </div>
+    <div class="theme-toggle">
+      <dark-mode-toggle
+        id="dark-mode-toggle-1"
+        appearance="toggle"
+        light="Dark"
+        dark="Light"
+        permanent
+      ></dark-mode-toggle>
+    </div>
+  </div>
 </body>


@@ -10,49 +10,57 @@
   <link rel="stylesheet" href="/css/normalize.css">
   <!-- Fix IE -->
   <!--[if lt IE 9]>
-    <script src="http://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7/html5shiv.js"></script>
-    <script src="http://cdnjs.cloudflare.com/ajax/libs/respond.js/1.4.2/respond.js"></script>
+    <script src="https://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7/html5shiv.js"></script>
+    <script src="https://cdnjs.cloudflare.com/ajax/libs/respond.js/1.4.2/respond.js"></script>
   <![endif]-->
   <!-- syntax highlighting CSS -->
   <link rel="stylesheet" href="/css/syntax.css">
+  <link rel="stylesheet" href="/css/syntax_dark.css" media="(prefers-color-scheme: dark)">
   <!-- Custom CSS -->
   <link rel="stylesheet" href="/css/main.css">
+  <link rel="stylesheet" href="/css/main_dark.css" media="(prefers-color-scheme: dark)">
   <!-- RSS Feed -->
-  <link href='/feed.xml' rel='alternate' type='application/atom+xml'>
+  <link href="/feed.xml" rel="alternate" type="application/atom+xml">
   <!-- Web Fonts -->
-  <link href='http://fonts.googleapis.com/css?family=Open+Sans:400,400italics,300,300italics,200' rel='stylesheet' type='text/css'>
+  <link href="https://fonts.googleapis.com/css?family=Open+Sans:400,400italics,300,300italics,200" rel="stylesheet" type="text/css">
   <!-- Icon Fonts -->
   <link rel="stylesheet" href="/css/ionicons.min.css">
   <!-- Favicon -->
-  <link rel="shortcut icon" href="/favicon.png" />
+  <link rel="shortcut icon" href="/favicon.png">
   <!-- Scripts -->
   <script src="/js/AnimationFrame.min.js"></script>
   <script async src="/js/magic.js"></script>
-  <!-- Google Analytics -->
-  <script>
-    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
-    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
-    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
-    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
-    ga('create', 'UA-39880341-1', 'auto');
-    ga('send', 'pageview');
-  </script>
-  <!-- End Google Analytics -->
+  <!-- Cloudflare Web Analytics -->
+  <script defer src='https://static.cloudflareinsights.com/beacon.min.js' data-cf-beacon='{"token": "54df1dc81a2d4cb7920b456212bbd437"}'></script>
+  <!-- End Cloudflare Web Analytics -->
+
+  <script type="module" src="https://unpkg.com/dark-mode-toggle"></script>
+
+  {% seo %}
 </head>
 <body>
-  <canvas id="magic"></canvas>
-  <div class="container">
-    {{ content }}
-  </div>
+  <div class="root">
+    <canvas id="magic"></canvas>
+    <div class="container">
+      {{ content }}
+    </div>
+    <div class="theme-toggle">
+      <dark-mode-toggle
+        id="dark-mode-toggle-1"
+        appearance="toggle"
+        light="Dark"
+        dark="Light"
+        permanent
+      ></dark-mode-toggle>
+    </div>
+  </div>
 </body>
 </html>


@@ -1,18 +1,20 @@
 ---
 layout: default
 ---
 <div class="card">
-  <div class="row clearfix post-header">
-    <div class="column three-fourths">
-      <a href="{{ post.url }}"><h2 class="post-title">{{ page.title }}</h2></a>
-    </div>
-    <div class="column fourth">
-      <span class="timestamp">{{ page.date | date_to_string }}</span>
-    </div>
+  <div class="row clearfix">
+    <div class="column full post-header">
+      <h2 class="post-title"><a href="{{ post.url }}">{{ page.title }}</a></h2>
+      <span class="timestamp">{{ page.date | date_to_string }}</span>
+    </div>
   </div>
   <div class="row clearfix">
-    <div class="column full post">
-      {{ content }}
-    </div>
+    <div class="column full post">{{ content }}</div>
   </div>
 </div>
+{% include comments.html %}
+<!-- disabling until I fix the mail form -->
+<!-- {% include mail-form.html %} -->


@@ -13,6 +13,7 @@ and was pretty familiar with it and I was beginning to get familiar with
 was what I was working with at [Valti](https://www.valti.com), and I was really
 liking making websites with it. It took what made Python awesome and applied
 that to web development.
+<!--excerpt-->
 
 I started from a blank Django project and built it up from there. Django's
 Object-Relational Mapper (ORM) can be boiled down to this: python classes


@@ -11,6 +11,7 @@ you can tell, there hasn't been any posts since my first ["Hello,
 World!"](/2012/12/03/hello-world) post. Sure, I've been working on projects, but I
 just haven't gotten to the point in any of those projects where I felt like I
 could blog in detail about it.
+<!--excerpt-->
 
 Then I watched this great talk that [Brian
 Jones](http://pyvideo.org/speaker/352/brian-k-jones) gave at
@@ -18,7 +19,7 @@ Jones](http://pyvideo.org/speaker/352/brian-k-jones) gave at
 pointed out to me:
 
 <div class="video-container"><iframe width="640" height="360"
-src="http://www.youtube.com/embed/BBfW3m3TK0w?feature=player_embedded"
+src="https://www.youtube.com/embed/BBfW3m3TK0w?feature=player_embedded"
 frameborder="0" allowfullscreen></iframe></div>
 
 One point that he makes that really resonates with me is how I should write


@@ -11,6 +11,7 @@ keeps track of the status of every machine and displays it on a
 [website](http://gmu.esuds.net/) so students can check how full the machines
 are before making the trek down to the laundry rooms. The system emails each
 student when their laundry is finished as well.
+<!--excerpt-->
 
 The only problem is that their user interface is pretty atrocious. I wrote up a
 [usability analysis](https://gist.github.com/thallada/5351114) of the site for
@@ -26,7 +27,7 @@ which made it easy for me to dive right in. I'll probably try out
 [d3js](http://d3js.org/) for my next visualization project though, it looks a
 whole lot more advanced.
 
-###Current laundry usage charts###
+### Current laundry usage charts
 
 I created an [app](/laundry) in [Django](https://www.djangoproject.com/) to
 display current laundry machine usage charts for all of the laundry rooms on
@@ -47,7 +48,7 @@ folder).
 The point was to make this as dead simple and easy to use as possible. Do you
 think I succeeded?
 
-###Weekly laundry usage chart###
+### Weekly laundry usage chart
 
 Knowing the *current* laundry machine usage is nice for saving a wasted trip
 down to the laundry room, but what if you wanted to plan ahead and do your


@@ -12,6 +12,7 @@ streamed to the user's Flash player (in their browser) bit-by-bit, the full
video file is never given to the user for them to keep. This is desirable to a
lot of media companies because then they can force you to watch through ads to
see their content and can charge you to download the full video.
<!--excerpt-->
However, [RTMPDump](http://rtmpdump.mplayerhq.hu/), an open-source tool
designed to intercept RTMP streams, can download the full video.
@@ -27,7 +28,7 @@ Since this is questionably legal, make sure you understand any Terms of
Services you accepted or laws in your locality regarding this before you follow
the steps below ;).
### Have Linux
Most of these instructions will assume you have Ubuntu, but
most distributions will work.
@@ -36,18 +37,18 @@ While RTMPDump works on a variety of operating systems, I've only researched
how to do this on Linux. Feel free to comment if you know how to do this in
Windows or OSX.
### Install RTMPDump
This open source goodness can be found at
[http://rtmpdump.mplayerhq.hu/](http://rtmpdump.mplayerhq.hu/) or you can just
install it using your Linux distro's package manager. For Ubuntu, that would be
typing the following into your terminal:
~~~ bash
sudo apt-get install rtmpdump
~~~
### Redirect ALL the RTMP!
Now we need to configure your firewall to redirect
all RTMP traffic to a local port on your computer (Note: this will screw up any
@@ -55,11 +56,11 @@ RTMP streaming video you try to watch on your computer, so make sure you run
the undo command in one of the later steps to return things to normal). Type
the following into your terminal, there should be no output from the command:
~~~ bash
sudo iptables -t nat -A OUTPUT -p tcp --dport 1935 -j REDIRECT
~~~
### Run rtmpsrv
When you install `rtmpdump`, a program called `rtmpsrv`
should have been bundled with it and installed as well. We will want to run
@@ -73,7 +74,7 @@ This should output something that looks like this:
Streaming on rtmp://0.0.0.0:1935
### Feed rtmpsrv the Precious Video
Now go to your browser and open/refresh
the page with the desired video. Try playing the video. If nothing happens and
@@ -87,24 +88,24 @@ will need it later.
You can CTRL+C out of rtmpsrv now that we have what we need.
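For reference, the command that rtmpsrv prints (and that you copy) typically looks something like the following. Every URL, playpath, and the output filename here are illustrative placeholders, not values from any real stream:

~~~ bash
rtmpdump -r "rtmp://cdn.example.com/ondemand" \
  -a "ondemand" -y "mp4:videos/example.mp4" \
  -W "http://www.example.com/player.swf" -p "http://www.example.com/watch/1234" \
  -o example.flv
~~~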
### Undo the Redirection
You must undo the iptables redirection command we
performed earlier before you can do anything else, so run this in your
terminal:
~~~ bash
sudo iptables -t nat -D OUTPUT -p tcp --dport 1935 -j REDIRECT
~~~
### Finally, Download the Precious Video
Now paste that command you copied
from the rtmpsrv output in the step before last into your terminal prompt and
hit enter. You should now see a torrent of `INFO` printout along with a
percentage as the video is being downloaded.
### Feast Eyes on Precious Video
Once downloaded, the video file, which has a
`flv` extension and was named by the `-o` parameter in the command you copied

View File

@@ -11,6 +11,7 @@ met since the past two internships I've had at [Valti](https://www.valti.com/)
and [Humbug](https://humbughq.com/) in Cambridge, Massachusetts. Seeing as it
encapsulated what I've learned culturally since then, I decided to post it here
as well.*
<!--excerpt-->
Hackers -- not your malicious meddling Hollywood-style speed-typists -- but the
type who sees a toaster and turns it into a computer capable of etching emails

View File

@@ -12,6 +12,7 @@ to create a homepage for the University's bookstore website, applying all of the
usability principles we had learned over the semester. I ended up working on it
when I wanted to procrastinate on assignments in my other classes, so I put
quite a bit of effort into it.
<!--excerpt-->
See it here: [swe205.hallada.net](http://swe205.hallada.net)
<div style="text-align: center">

View File

@@ -10,6 +10,7 @@ includes redditing. I probably spend far too much time on
way to view reddit through the command-line. [w3m](http://w3m.sourceforge.net/)
could render reddit okay, but I couldn't view my personal front-page because
that required me to login to my profile.
<!--excerpt-->
The solution was [cortex](http://cortex.glacicle.org/), a CLI app for viewing
reddit.
@@ -17,19 +18,19 @@ reddit.
However, I kind of got tired of viewing reddit through w3m, the header alone is
a few pages long to scroll through, and the CSS for the comments doesn't load so
there isn't any sense of threading. But, then I discovered reddit's mobile
website: [http://reddit.com/.mobile](http://reddit.com/.mobile), and it looks absolutely
beautiful in w3m. In fact, I think I prefer it to the normal website in any
modern browser; there are no distractions, just pure content.
<a href="/img/blog/w3m_mobile_reddit.png"><img src="/img/blog/w3m_mobile_reddit.png" alt="m.reddit.com rendered in w3m"></a>
In order to get cortex to open the mobile version of reddit, I made a bash
script wrapper around w3m that takes urls and appends `".mobile"` to the end of
reddit urls before passing them to w3m (as well as fixing a double forward slash
error in the comment uri cortex outputs that desktop reddit accepts but mobile
reddit 404s on). The script:
~~~ bash
#!/bin/bash
args=()
@@ -45,8 +46,17 @@ done
args+=("$@")
for arg in "${args[@]}" ; do
    # Switch to mobile reddit
    url=$arg
    mobile='.mobile'
    if [[ $url =~ http:\/\/www.reddit.com || $url =~ http:\/\/reddit.com ]]
    then
        if [[ $url =~ \/$ ]]
        then
            url=$url$mobile
        else
            url=$url'/'$mobile
        fi
    fi
    # Fix double backslash error in comment uri for mobile reddit
    url=${url/\/\/comments/\/comments}
    if [[ $t == "1" ]]; then
@@ -55,7 +65,7 @@ for arg in "${args[@]}" ; do
        w3m "${url}"
    fi
done
~~~
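To exercise the URL rewrite on its own, here is a minimal, self-contained sketch of the same `.mobile` suffix logic as a bash function (the URLs below are made-up examples):

~~~ bash
#!/bin/bash
# Append ".mobile" to reddit urls, mirroring the wrapper script above, and
# collapse the double slash before /comments that 404s on mobile reddit.
mobilize() {
    local url=$1 mobile='.mobile'
    if [[ $url =~ http:\/\/www\.reddit\.com || $url =~ http:\/\/reddit\.com ]]
    then
        if [[ $url =~ \/$ ]]
        then
            url=$url$mobile
        else
            url=$url'/'$mobile
        fi
    fi
    url=${url/\/\/comments/\/comments}
    echo "$url"
}

mobilize "http://reddit.com/r/commandline"   # → http://reddit.com/r/commandline/.mobile
mobilize "http://www.reddit.com/"            # → http://www.reddit.com/.mobile
mobilize "http://example.com/page"           # → http://example.com/page (untouched)
~~~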
Since I regularly use [Tmux](http://tmux.sourceforge.net/) (with
[Byobu](http://byobu.co/)), I also added an optional `-t`/`--tmux` switch that
@@ -64,10 +74,10 @@ will open w3m in a temporary new tmux window that will close when w3m is closed.
I saved the script as `w3m-reddit` and made it an executable command. In Ubuntu
that's done with the following commands:
~~~ bash
$ sudo mv w3m-reddit /usr/bin/
$ sudo chmod +x /usr/bin/w3m-reddit
~~~
Now cortex needs to be configured to use `w3m-reddit`, and that's done by
setting `browser-command` in the cortex config at `~/.cortex/config` to
@@ -93,3 +103,9 @@ scrapping the whole thing and starting over in Python instead.
Stay tuned for more posts on how I view images and videos efficiently from the
command-line.
EDIT 04/25/2015: Reddit seems to have gotten rid of their old mobile reddit site
and replaced it with a more modern version that unfortunately doesn't look as
good in w3m. However, the old mobile site is still accessible by adding
".mobile" to the end of urls. The script above has been edited to reflect this
change.

View File

@@ -17,6 +17,7 @@ customizability and compatibility with other programs. There's nothing more
powerful than being able to whip up a small python or bash script that interacts
with a couple of other programs to achieve something instantly that optimizes my
work flow.
<!--excerpt-->
I use the [Awesome](http://awesome.naquadah.org/) window manager, which works
great for tiling up terminal windows right up next to browser windows. However,
@@ -53,7 +54,7 @@ This is how I got it setup (on any Ubuntu machine with sudo privileges):
Save the following python file in `/usr/bin/` as `search-pane` (no extension):
~~~ python
#!/usr/bin/python
from subprocess import call, check_output
from threading import Thread
@@ -106,27 +107,27 @@ except Exception, errtxt:
print errtxt
call(['w3m', url]) # pass url off to w3m
~~~
Make the directory and file for search history:
~~~ bash
mkdir ~/.search-pane
touch ~/.search-pane/history
~~~
Allow anyone to execute the python script (make it into a program):
~~~ bash
chmod a+x /usr/bin/search-pane
~~~
To get quick access to the program from the command-line edit `~/.bashrc` to
add:
~~~ bash
alias s='search-pane'
~~~
To add byobu key bindings edit `~/.byobu/keybindings.tmux` (or `/usr/share/byobu/keybindings/f-keys.tmux`):
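For example, a line like the following in `~/.byobu/keybindings.tmux` would launch `search-pane` in a new tmux window; the F4 key and the window name are arbitrary choices for illustration, not byobu defaults:

~~~
bind-key -n F4 new-window -n search "search-pane"
~~~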

View File

@@ -9,6 +9,7 @@ just glorified browsers, right? What if I wanted to do anything outside of the
browser? Why would you spend [$1299 or $1449 for a
computer](https://www.google.com/intl/en/chrome/devices/chromebooks.html#pixel)
that can only run a browser?
<!--excerpt-->
While I know a lot of people who buy expensive MacBooks only to just use a web
browser and iTunes, I'm a bit more of a power user and I need things like
@@ -82,7 +83,7 @@ of tweaking. If anyone has read my past posts, they know that I am obsessed
with configuring things. Here is what I came up with for everything I would
ever need to do on my Chromebook:
### Writing
I spent a lot of time downloading
[various](https://chrome.google.com/webstore/detail/write-space/aimodnlfiikjjnmdchihablmkdeobhad)
@@ -115,7 +116,7 @@ hassle though, so I often just stick to the default style. It's a sign that I
am procrastinating if I'm trying to look for the “perfect template” to write in
anyways.
### Programming
I've gotten so used to [vim](http://www.vim.org/) in a Linux
terminal that I don't think I could ever use any other editor. There are a few
@@ -150,7 +151,7 @@ have all of the great chrome apps and extensions right at my fingertips.
Especially when some apps can be opened up in small panels in the corner of the
screen temporarily.
### Panels
Chrome recently released a new concept for opening new windows
called “Panels”, and once I discovered them I couldn't get enough of them. The
@@ -187,7 +188,7 @@ Panel](https://chrome.google.com/webstore/detail/improved-google-tasks-pan/kgnap
I'm still lacking Facebook Messenger and Google Voice panel view apps, so I
might try my hand at creating one myself soon.
### Web Browsing
And, of course, being a laptop dedicated to chrome, it
obviously has a great web browsing experience.

View File

@@ -13,6 +13,7 @@ features, one of its best being a version control system that allows you to
send a draft to other people and accept or reject any changes they suggest. It
also has a minimalistic iA Writer type interface, which focuses on the actual
writing and nothing more.
<!--excerpt-->
One of my most favorite features that I have just discovered, though, is that
it allows publishing any Draft document to any arbitrary

View File

@@ -9,6 +9,7 @@ development knowledge had exceeded what it was showing off. The main thing that
annoyed me about my last website was that I was hosting what essentially was a
static website on a web framework meant for dynamic websites. It was time for an
update.
<!--excerpt-->
I decided to go with [Jekyll](http://jekyllrb.com/) which had everything I
wanted:
@@ -47,5 +48,5 @@ under 20% (on my machine), which was actually better than a few chrome
extensions I was running anyways.
Hopefully this new blog will also inspire me to write more posts as [my last
post](/2013/10/03/publishing-draft-docs-to-my-blog.html)
was almost a year ago now.

View File

@@ -0,0 +1,268 @@
---
title: Midnight Desktop
layout: post
---
I tend to use Linux (Ubuntu) on my desktop late at night in a dark room. To
protect my eyes from the blinding light of my monitors I've tooled my desktop
environment over the course of a few months to be as dark as possible. It has
gotten complex enough that I thought it would be worth sharing now.
<!--excerpt-->
### dotfiles
Before I begin, I want to note that all the configuration for the setup I'm
describing is stored in a [dotfiles repo on my github
profile](https://github.com/thallada/dotfiles). If you would like to replicate
any of this setup, I would go there. Just note that I will probably be updating
the master branch fairly often, but the
[midnight](https://github.com/thallada/dotfiles/tree/midnight) branch will
always contain the setup described here.
### bspwm
Inspired by [/r/unixporn](http://www.reddit.com/r/unixporn), I decided to switch
from gnome to bspwm, a minimal tiling window manager that positions windows like
leaves on a binary tree.
I don't really use the tiling features that often, though. I often do most of my
work in the terminal and [tmux](http://tmux.sourceforge.net/) does the terminal
pane management. But, when I do open another application, it's nice that bspwm
forces it to use the maximum available space.
I also like how hackable the whole manager is. There is a terminal command
`bspc` that controls the entire desktop environment and a separate program
`sxhkd` (probably the hardest program name ever to remember) handles all of the
hotkeys for the environment. All of them are stored in a
[`sxhkdrc`](https://github.com/thallada/dotfiles/blob/master/sxhkd/.config/sxhkd/sxhkdrc)
under the home directory and it's super easy to add my own. The hotkeys make
this superior to gnome for me because I never have to touch my mouse to move
around the desktop.
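For illustration, hotkey definitions in `sxhkdrc` pair a chord line with an indented command, something like this sketch (the exact bindings in my config differ; see the linked file):

~~~
# launch a terminal with the dark profile
super + Return
    gnome-terminal --window-with-profile=bspwm

# focus the window in the given direction
super + {h,j,k,l}
    bspc node -f {west,south,north,east}
~~~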
### gnome and gtk
I still love some of the features from gnome. Especially the text hinting, which
is why I still run `gnome-settings-daemon` in my [bspwm startup
script](https://github.com/thallada/dotfiles/blob/master/bspwm/bin/bspwm-session).
To make gtk applications universally dark (and also to tune text hinting)
install the `gnome-tweak-tool`. There should be a "Global Dark Theme" setting
under the "Appearance" tab that can be enabled. I use the
[Numix](https://numixproject.org/) gtk theme which seems to behave fine with
this setting.
### Gnome Terminal
I've tried using a few other lighter-weight terminals like xterm, but I still
like the features of gnome-terminal more. I created a "bspwm" profile and set
the background to be transparent with opacity at about half-way on the slider.
My background, set in the [bspwm startup
script](https://github.com/thallada/dotfiles/blob/master/bspwm/bin/bspwm-session)
is a subtle [dark tiling pattern](https://www.toptal.com/designers/subtlepatterns/mosaic/)
so this effectively makes the background of the terminal dark.
In my
[sxhkdrc](https://github.com/thallada/dotfiles/blob/master/sxhkd/.config/sxhkd/sxhkdrc)
I can then map my hotkeys for starting a new terminal to the command
`gnome-terminal --window-with-profile=bspwm`.
### vim
Making vim dark is pretty easy. Just put this in the
[`.vimrc`](https://github.com/thallada/dotfiles/blob/master/vim/.vimrc):
~~~ vim
set background=dark
~~~
I use the colorscheme
[distinguished](https://github.com/Lokaltog/vim-distinguished) which is
installed by putting the `distinguished.vim` file under
[`~/.vim/colors/`](https://github.com/thallada/dotfiles/tree/master/vim/.vim/colors)
and adding this to the `.vimrc`:
~~~ vim
colorscheme distinguished
~~~
### tmux/byobu
I like the abstraction that [byobu](http://byobu.co/) puts on top of tmux, so
that's what I use in the terminal. Colors can be configured by editing the
[`~/.byobu/color.tmux`](https://github.com/thallada/dotfiles/blob/master/byobu/.byobu/color.tmux)
file. This is what I have in mine:
~~~
BYOBU_DARK="\#333333"
BYOBU_LIGHT="\#EEEEEE"
BYOBU_ACCENT="\#4D2100"
BYOBU_HIGHLIGHT="\#303030"
MONOCHROME=0
~~~
### evince
I tell my browser, firefox, to open pdfs in evince (aka. Document Viewer)
because evince can darken pdfs.
Select View > Invert Colors and then Edit > Save Current Settings as Default and
now most pdfs will be displayed as white text on black background.
### gimp
Gimp allows you to change themes easily. [Gimp GTK2 Photoshop CS6
Theme](http://gnome-look.org/content/show.php?content=160952) is my favorite
dark theme. Put that in `~/.gimp-2.8/themes/` (or whichever gimp version is
installed) and, in Gimp, change the theme at Edit > Preferences > Theme.
### Firefox
I had to hack firefox a lot to get it to be universally dark since the web
(unfortunately!) doesn't have a night mode switch. I'm using firefox instead of
chrome because firefox has better customization for doing something this
extreme.
#### Userstyles
Firefox has a really neat addon called
[Stylish](https://addons.mozilla.org/en-us/firefox/addon/stylish/) that allows
you to install and edit user CSS files to change the style of any website you
visit. A lot of popular websites have dark themes on
[userstyles.org](https://userstyles.org/), but the rest of the internet still
mostly has a white background by default.
Luckily there's a few global dark themes. [Midnight Surfing
Alternative](https://userstyles.org/styles/47391/midnight-surfing-alternative)
seemed to work the best for me.
However, since the theme is global, it overwrites the custom tailored dark
themes that I had installed for specific popular sites (listed below) making the
sites ugly. The Midnight Surfing Alternative theme can be edited through the
Stylish extension to exclude the websites that I already have dark themes for.
[This superuser question explains what to
edit](http://superuser.com/questions/463153/disable-stylish-on-certain-sites-in-firefox).
Now, whenever I add a new dark theme to Stylish, I edit the regex to add the
domains it covers to the parenthesized list that is delimited by pipes.
~~~ css
@-moz-document regexp("(https?|liberator|file)://(?!([^.]+\\.)?(maps\\.google\\.com|...other domains....)[/:]).*"){
~~~
Here is the list of dark themes I'm currently using with Stylish in addition to
Midnight Surfing Alternative:
* [Amazon Dark -
VisualPlastik](https://userstyles.org/styles/52294/amazon-dark-visualplastik)
* [Dark Feedly
(Hauschild's)](https://userstyles.org/styles/89622/dark-feedly-hauschild-s)
* [Dark Gmail mod by
Karsonito](https://userstyles.org/styles/107544/dark-gmail-mod-by-karsonito)
(this one is a bit buggy right now, though)
* [Dark Netflix
[GRiMiNTENT]](https://userstyles.org/styles/102627/dark-netflix-grimintent)
* [dark-facebook 2 [a dark facebook
theme]](https://userstyles.org/styles/95359/facebook-dark-facebook-2-a-dark-facebook-theme)
* [Forecast.io - hide
map](https://userstyles.org/styles/104812/forecast-io-hide-map)
* [GitHub Dark](https://userstyles.org/styles/37035/github-dark) (this one is
really well done, I love it)
* [Google Play (Music) Dark \*Updated
5-15\*](https://userstyles.org/styles/107643/google-play-music-dark-updated-5-15)
* [Messenger.com Dark](https://userstyles.org/styles/112722/messenger-com-dark)
* [Telegram web dark / custom
color](https://userstyles.org/styles/109612/telegram-web-dark-custom-color)
* [Youtube - Lights Out - A Dark Youtube
Theme](https://userstyles.org/styles/92164/youtube-lights-out-a-dark-youtube-theme)
#### UI Themes
Most of my firefox UI is styled dark with the [FT
DeepDark](https://addons.mozilla.org/en-US/firefox/addon/ft-deepdark/) theme.
The firefox developer tools can be [themed dark in its
settings](http://soledadpenades.com/2014/11/20/using-the-firefox-developer-edition-dark-theme-with-nightly/).
#### Addons
For reddit, I use the [RES](http://redditenhancementsuite.com/) addon which has
a night mode option.
I also use [Custom New
Tab](https://addons.mozilla.org/en-US/firefox/addon/custom-new-tab/) combined
with [homepage.py](https://github.com/ok100/homepage.py) to display a list of my
favorite websites when I start a new tab.
[Vimperator](https://addons.mozilla.org/en-US/firefox/addon/vimperator/) allows
me to control firefox completely with my keyboard which is really useful when I
am switching back and forth between firefox and vim. By default, the vimperator
window has a white background, so I had to [set it to a dark
theme](https://github.com/vimpr/vimperator-colors). Also, in order to make all
of the vimperator help pages dark, I had to add the protocol `liberator://` to
the regex for Midnight Surfing Alternative (exact syntax for that above).
### Redshift
At night, it's also useful to filter out blue light to help with sleep.
[Redshift](http://jonls.dk/redshift/) is a utility that does this automatically
while running in the background.
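Redshift reads its settings from `~/.config/redshift.conf`; a minimal sketch with example color temperatures and a manually specified location (the numbers here are placeholders, adjust to taste):

~~~
[redshift]
temp-day=6500
temp-night=3700
location-provider=manual

[manual]
lat=42.4
lon=-71.1
~~~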
![Midnight in action with redshift](/assets/midnight_screenshot_redshift.png)
### Invert it all!
I noticed that with the dark colors and my monitor brightness turned low, it was
hard to see the screen during the day because of glares. An easy solution to
this is to simply invert the colors on the output of the monitor into an instant
day theme.
I would have used the command `xcalib -a -i` but that would only work on one
monitor and I have two. Luckily, someone made a utility that would invert colors
on more than one monitor called
[xrandr-invert-colors](https://github.com/zoltanp/xrandr-invert-colors).
The only problem was that this utility seemed to interfere with redshift, so I
made [a script that would disable redshift before
inverting](https://github.com/thallada/dotfiles/blob/master/invert/bin/invert).
~~~ bash
#!/bin/bash
inverted=$(xcalib -a -p | head -c 1)
if [ "$inverted" == "W" ]
then
if [ -z "$(pgrep redshift)" ]
then
xrandr-invert-colors
redshift &
fi
else
if [ -z "$(pgrep redshift)" ]
then
xrandr-invert-colors
else
killall redshift
sleep 3
xrandr-invert-colors
fi
fi
~~~
And, now I have [a
shortcut](https://github.com/thallada/dotfiles/commit/e5153a90fa7c89a0e2ca16e5943f0fa20d4a9512)
to invert the screen.
However, images and videos look pretty ugly inverted. VLC has a setting under
Tools > Effects and Filters > Video Effects > Colors called Negate colors that
can fix that.
For firefox, I made a global userstyle to invert images and videos.
~~~ css
@-moz-document regexp("(https?|liberator|file)://(?!([^.]+\\.)?[/:]).*"){
img, video, div.html5-video-container, div.player-api, span.emoji, i.emoji, span.emoticon, object[type="application/x-shockwave-flash"], embed[type="application/x-shockwave-flash"] {
filter: invert(100%);
}
}
~~~
Whenever I invert the colors, I enable that global theme on firefox.
![Midnight inverted into a day theme](/assets/midnight_screenshot_inverted.png)

View File

@@ -0,0 +1,276 @@
---
title: Generating Realistic Satellite Imagery with Deep Neural Networks
layout: post
---
I've been doing a lot of experimenting with [neural-style](https://github.com/jcjohnson/neural-style)
the last month. I think I've discovered a few exciting applications of the
technique that I haven't seen anyone else do yet. The true power of this
algorithm really shines when you can see concrete examples.
<!--excerpt-->
Skip to the **Applications** part of this post to see the outputs from my
experimentation if you are already familiar with DeepDream, Deep Style, and all
the other latest happenings in generating images with deep neural networks.
### Background and History
On [May 18, 2015 at 2 a.m., Alexander
Mordvintsev](https://medium.com/backchannel/inside-deep-dreams-how-google-made-its-computers-go-crazy-83b9d24e66df#.g4t69y8wy),
an engineer at Google, did something with deep neural networks that no one had
done before. He took a net designed for *recognizing* objects in images and used
it to *generate* objects in images. In a sense, he was telling these systems
that mimic the human visual cortex to hallucinate things that weren't really
there. The [results](https://i.imgur.com/6ocuQsZ.jpg) looked remarkably like LSD
trips or what a [schizophrenic person sees on a blank
wall](https://www.reddit.com/r/deepdream/comments/3cewgn/an_artist_suffering_from_schizophrenia_was_told/).
Mordvintsev's discovery quickly gathered attention at Google once he posted
images from his experimentation on the company's internal network. On June 17,
2015, [Google posted a blog post about the
technique](http://googleresearch.blogspot.com/2015/06/inceptionism-going-deeper-into-neural.html)
(dubbed "Inceptionism") and how it was useful for opening up the notoriously
black-boxed neural networks using visualizations that researchers could examine.
These machine hallucinations were key for identifying the features of objects
that neural networks used to tell one object from another (like a dog from a
cat). But the post also revealed the [beautiful
results](https://goo.gl/photos/fFcivHZ2CDhqCkZdA) of applying the algorithm
iteratively on its own outputs and zooming out at each step.
The internet exploded in response to this post. And once [Google posted the code
for performing the
technique](http://googleresearch.blogspot.com/2015/07/deepdream-code-example-for-visualizing.html?m=1),
people began experimenting and sharing [their fantastic and creepy
images](https://www.reddit.com/r/deepdream) with the world.
Then, on August 26, 2015, a paper titled ["A Neural Algorithm of Artistic
Style"](http://arxiv.org/abs/1508.06576) was published. It showed how one could
identify which layers of deep neural networks recognized stylistic information
of an image (and not the content) and then use this stylistic information in
Google's Inceptionism technique to paint other images in the style of any
artist. A [few](https://github.com/jcjohnson/neural-style)
[implementations](https://github.com/kaishengtai/neuralart) of the paper were
put up on Github. The internet exploded again in a frenzy. This time, the
images produced were less like psychedelic-induced nightmares and more like the
next generation of Instagram filters ([reddit
how-to](https://www.reddit.com/r/deepdream/comments/3jwl76/how_anyone_can_create_deep_style_images/)).
People began to wonder [what all of this
meant](http://www.hopesandfears.com/hopes/culture/is-this-art/215039-deep-dream-google-art)
to [the future of
art](http://kajsotala.fi/2015/07/deepdream-today-psychedelic-images-tomorrow-unemployed-artists/).
Some of the results produced were [indistinguishable from the style of dead
artists'
works](https://raw.githubusercontent.com/jcjohnson/neural-style/master/examples/outputs/tubingen_starry.png).
Was this a demonstration of creativity in computers or just a neat trick?
On November 19, 2015, [another paper](http://arxiv.org/abs/1511.06434) was
released that demonstrated a technique for generating scenes from convolutional
neural nets ([implementation on Github](https://github.com/Newmu/dcgan_code)).
The program could generate random (and very realistic) [bedroom
images](https://github.com/Newmu/dcgan_code/raw/master/images/lsun_bedrooms_five_epoch_samples.png)
from a neural net trained on bedroom images. Amazingly, it could also generate
[the same bedroom from any
angle](https://github.com/Newmu/dcgan_code/blob/master/images/lsun_bedrooms_five_epochs_interps.png).
It could also [produce images of the same procedurally generated face from any
angle](https://github.com/Newmu/dcgan_code/blob/master/images/turn_vector.png).
Theoretically, we could use this technology to create *procedurally generated
game art*.
The main thing holding this technology back from revolutionizing procedurally
generated video games is that it is not real-time. Using
[neural-style](https://github.com/jcjohnson/neural-style) to apply artistic
style to a 512 by 512 pixel content image could take minutes even on the
top-of-the-line GTX Titan X graphics card. Still, I believe this technology has
a lot of potential for generating game art even if it can't act as a real-time
filter.
### Applications: Generating Satellite Images for Procedural World Maps
I personally know very little about machine learning, but I have been able to
produce a lot of interesting results using the tool provided by
[neural-style](https://github.com/jcjohnson/neural-style).
Inspired by [Kaelan's procedurally generated world
maps](http://blog.kaelan.org/randomly-generated-world-map/), I wanted to extend
the idea by generating realistic satellite images of the terrain maps. The
procedure is simple: take a [generated terrain map](/assets/kaelan_terrain1.png)
and apply the style of a [real-world satellite image](/assets/uk_satellite.jpg)
on it using neural-style.
![Output of generated map plus real-world satellite
imagery](/assets/satellite_terrain1_process.png)
The generated output takes on whatever terrain is in the satellite image. Here
is an output processing one of Kaelan's maps with an [arctic satellite
image](/assets/svalbard_satellite.jpg):
![Kaelan's terrain map](/assets/kaelan_terrain2.jpg)
![Output of terrain map plus arctic satellite imagery](/assets/satellite_terrain2.png)
And again, with one of Kaelan's desert maps and a [satellite image of a
desert](/assets/desert_satellite.jpg):
![Kaelan's desert terrain map](/assets/kaelan_terrain3.jpg)
![Output of terrain map plus desert satellite imagery](/assets/satellite_terrain3.png)
It even works with [Kaelan's generated hexagon
maps](http://blog.kaelan.org/hexagon-world-map-generation/). Here's an island
hexagon map plus a [satellite image of a volcanic
island](/assets/volcano_satellite.jpg):
![Kaelan's island hexagon map](/assets/kaelan_hex_terrain.jpg)
![Output of hexagon map plus island satellite
imagery](/assets/satellite_hex_terrain.png)
This image even produced an interesting three-dimensional effect because of the
volcano in the satellite image.
By the way, this also works with Minecraft maps. Here's a Minecraft map I found
on the internet plus a [satellite image from Google
Earth](/assets/river_satellite.png):
![Minecraft map](/assets/minecraft_map.jpg)
![Output of minecraft map plus river satellite
imagery](/assets/satellite_minecraft_map.png)
No fancy texture packs or 3-D rendering needed :).
Here is the Fallout 4 grayscale map plus a
[satellite image of Boston](/assets/boston_aerial.jpg):
![Fallout 4 grayscale map](/assets/fallout4_map.png)
![Output of Fallout 4 map plus Boston satellite
imagery](/assets/satellite_fallout4_map.png)
Unfortunately, it puts the dense, built-up part of the city in the wrong part
of the geographic area. But this is understandable, since we gave the algorithm
no information about where the city center is on the map.
We can also make the generated terrain maps look like old hand-drawn maps using
neural-style. With Kaelan's terrain map as the
content and [the in-game Elder Scrolls IV Oblivion map of
Cyrodiil](/assets/cyrodiil_ingame.jpg) as the style we get this:
![Kaelan's terrain map](/assets/kaelan_terrain1.png)
![Output of terrain map plus map of Cyrodiil](/assets/cyrodiil_terrain1.png)
It looks cool, but the water isn't conveyed very clearly (e.g. deep water ends
up looking like land). Neural-style seems to work better when there is plenty
of color in both images.
Here is the output of the hex terrain plus satellite map above and the Cyrodiil
map, which looks a little cleaner:
![Satellite-like hex terrain map](/assets/satellite_hex_terrain.png)
![Output of hex terrain plus satellite and map of
Cyrodiil](/assets/cyrodiil_satellite_hex_terrain.png)
I was interested to see what neural-style could generate from random noise, so I
rendered some clouds in GIMP and ran it with a satellite image of [Mexico City
from Google Earth](/assets/mexico_city.jpg) (by the way, I've been getting high
quality Google Earth shots from
[earthview.withgoogle.com](https://earthview.withgoogle.com)).
![Random clouds](/assets/blurry_clouds.png)
![Output of random clouds and Mexico City](/assets/random_mexico_city.png)
Not bad for a neural net without a degree in urban planning.
I also tried generating from random noise with a satellite image of [a water
treatment plant in Peru](/assets/treatment_plant.jpg):
![Random clouds](/assets/blurry_clouds2.png)
![Output of random clouds and water treatment
plant](/assets/random_treatment_plant.png)
### Applications: More Fun
For fun, here are some other outputs that I liked.
[My photo of Boston's skyline as the content](/assets/boston_skyline.jpg) and
[Vincent van Gogh's The Starry Night as the style](/assets/starry_night.jpg):
![Output of Boston skyline and starry night](/assets/starry_boston.png)
[A photo of me](/assets/standing_forest.jpg) (by Aidan Bevacqua) and [Forrest in
the end of Autumn by Caspar David Friedrich](/assets/forrest_autumn.jpg):
![Output of me and Forrest in the end of
Autumn](/assets/dead_forest_standing.png)
[Another photo of me by Aidan](/assets/sitting_forest.jpg) in the same style:
![Output of me and Forrest in the end of Autumn](/assets/dead_forest_sitting.png)
[A photo of me on a mountain](/assets/mountain_view.jpg) (by Aidan Bevacqua) and
[pixel art by Paul Robertson](/assets/pixels.png)
![Output of me on a mountain and pixel art](/assets/mountain_view_pixels.png)
[A photo of a park in Copenhagen I took](/assets/copenhagen_park.jpg) and a
painting similar in composition, [Avenue of Poplars at Sunset by Vincent van
Gogh](/assets/avenue_poplars.jpg):
![Output of park in Copenhagen and Avenue of Poplars at
Sunset](/assets/poplars.png)
[My photo of the Shenandoah National Park](/assets/shenandoah_mountains.jpg) and
[this halo graphic from GMUNK](/assets/halo_ring_mountains.jpg)
([GMUNK](http://www.gmunk.com/filter/Interactive/ORA-Summoners-HALO)):
![Output of Shenandoah mountains and halo ring
mountains](/assets/halo_shenandoah.png)
[A photo of me by Aidan](/assets/me.png) and a [stained glass
fractal](/assets/stained_glass.jpg):
![Output of me and a stained glass fractal](/assets/stained_glass_portrait.png)
Same photo of me and some [psychedelic art by GMUNK](/assets/pockets.jpg)
![Output of me and psychedelic art](/assets/pockets_portrait.png)
[New York City](/assets/nyc.jpg) and [a rainforest](/assets/rainforest.jpg):
![Output of New York City and a rainforest](/assets/jungle_nyc.png)
[Kowloon Walled City](/assets/kowloon.jpg) and [a National Geographic
Map](/assets/ngs_map.jpg):
![Output of Kowloon and NGS map](/assets/kowloon_ngs.png)
[A photo of me by Aidan](/assets/side_portrait.jpg) and [Head of Lioness by
Theodore Gericault](/assets/head_lioness.jpg):
![Output of photo of me and Head of Lioness](/assets/lion_portrait.png)
[Photo I took of a Norwegian forest](/assets/forest_hill.jpg) and [The Mountain
Brook by Albert Bierstadt](/assets/mountain_brook.jpg):
![Output of Norwegian forest and The Mountain
Brook](/assets/mountain_brook_hill.png)
### Limitations
I don't have infinite money for a GTX Titan X, so I'm stuck with using OpenCL on
my more-than-a-few-generations-old AMD card. It takes about a half-hour to
generate one 512x512 px image in my set-up (which makes the feedback loop for
correcting mistakes *very* long). And sometimes neural-style refuses to run on
my GPU (I suspect it runs out of VRAM), so I have to run it on my CPU, which
takes even longer...
I am unable to generate bigger images (though
[the author has been able to generate up to 1920x1010
px](https://github.com/jcjohnson/neural-style/issues/36#issuecomment-142994812)).
As the size of the output increases, the amount of memory and time needed to
generate it also increases. And it's not practical to test parameters on
thumbnails, because increasing the image size will probably generate a very
different image: all the other parameters stay the same even though their
effects depend on the image size.
Some people have had success running these neural nets on GPU spot instances on
AWS. It would certainly be cheaper than buying a new GPU in the short term.
So, I have a few more ideas for what to run, but it will take me quite a while
to get through the queue.

---
title: How to Install TensorFlow on Ubuntu 16.04 with GPU Support
layout: post
---
I found the [tensorflow
documentation](https://www.tensorflow.org/install/install_linux) rather lacking
for installation instructions, especially in regard to getting GPU support.
I'm going to write down my notes from wrangling with the installation here for
future reference, and hopefully this helps someone else too.
<!--excerpt-->
This will invariably go out-of-date at some point, so be mindful of the publish
date of this post. Make sure to cross-reference other documentation that has
more up-to-date information.
## Assumptions
These instructions are very specific to my environment, so this is what I am
assuming:
1. You are running Ubuntu 16.04. (I have 16.04.1)
- You can check this in the output of `uname -a`
2. You have a 64 bit machine.
- You can check this with `uname -m`. (should say `x86_64`)
3. You have an NVIDIA GPU that has CUDA Compute Capability 3.0 or higher.
[NVIDIA documentation](https://developer.nvidia.com/cuda-gpus) has a full table
of cards and their Compute Capabilities. (I have a GeForce GTX 980 Ti)
   - You can check what card you have in Settings > Details under the label
     "Graphics"
   - You can also check by verifying there is any output when you run `lspci |
     grep -i nvidia`
4. You have a linux kernel version 4.4.0 or higher. (I have 4.8.0)
   - You can check this by running `uname -r`
5. You have gcc version 5.3.1 or higher installed. (I have 5.4.0)
   - You can check this by running `gcc --version`
6. You have the latest [proprietary](https://i.imgur.com/8osspXj.jpg) NVIDIA
drivers installed.
   - You can check this and install it if you haven't in the "Additional
     Drivers" tab in the "Software & Updates" application (`update-manager`).
     (I have version 375.66 installed)
7. You have the kernel headers installed.
   - Just run `sudo apt-get install linux-headers-$(uname -r)` to install them
     if you don't have them installed already.
8. You have Python installed. The exact version shouldn't matter, but for the
rest of this post I'm going to assume you have `python3` installed.
- You can install `python3` by running `sudo apt-get install python3`. This
will install Python 3.5.
- Bonus points: you can install Python 3.6 by following [this
answer](https://askubuntu.com/a/865569), but Python 3.5 should be fine.
## Install the CUDA Toolkit 8.0
NVIDIA has [a big scary documentation
page](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/) on this, but I
will summarize only the parts you need to know here.
Go to the [CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)
page. Click Linux > x86_64 > Ubuntu > 16.04 > deb (network).
Click download and then follow the instructions, copied here:
1. `sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb`
2. `sudo apt-get update`
3. `sudo apt-get install cuda`
This will install CUDA 8.0. On my machine, it installed to the directory
`/usr/local/cuda-8.0/`.
There are some [post-install
actions](http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions)
we must follow:
1. Edit your `~/.bashrc`
- Use your favorite editor `gedit ~/.bashrc`, `nano ~/.bashrc`, `vim
~/.bashrc`, whatever.
2. Add the following lines to the end of the file:
```bash
# CUDA 8.0 (nvidia) paths
export CUDA_HOME=/usr/local/cuda-8.0
export PATH=/usr/local/cuda-8.0/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
```
3. Save and exit.
4. Run `source ~/.bashrc`.
5. Install writable samples by running the script `cuda-install-samples-8.0.sh
~/`.
- If the script cannot be found, the above steps didn't work :(
- I don't actually know if the samples are absolutely required for what I'm
using CUDA for, but it's recommended according to NVIDIA, and compiling
them will output a nifty `deviceQuery` binary which can be run to test if
everything is working properly.
6. Make sure `nvcc -V` outputs something.
- If you get an error, steps 1-4 above didn't work :(
7. `cd ~/NVIDIA_CUDA-8.0_Samples`, cross your fingers, and run `make`
- The compile will take a while
- My compile actually errored near the end with an error about `/usr/bin/ld:
cannot find -lnvcuvid`. I *think* that doesn't really matter because the
binary files were still output.
8. Try running `~/NVIDIA_CUDA-8.0_Samples/bin/x86_64/linux/release/deviceQuery`
to see if you get any output. Hopefully you will see your GPU listed.
## Install cuDNN v5.1
[This AskUbuntu answer](https://askubuntu.com/a/767270) has good instructions.
Here are the instructions specific to this set-up:
1. Visit the [NVIDIA cuDNN page](https://developer.nvidia.com/cudnn) and click
"Download".
2. Join the program and fill out the survey.
3. Agree to the terms of service.
4. Click the link for "Download cuDNN v5.1 (Jan 20, 2017), for CUDA 8.0"
5. Download the "cuDNN v5.1 Library for Linux" (3rd link from the top).
6. Untar the downloaded file. E.g.:
```bash
cd ~/Downloads
tar -xvf cudnn-8.0-linux-x64-v5.1.tgz
```
7. Install the cuDNN files to the CUDA folder:
```bash
cd cuda
sudo cp -P include/* /usr/local/cuda-8.0/include/
sudo cp -P lib64/* /usr/local/cuda-8.0/lib64/
sudo chmod a+r /usr/local/cuda-8.0/lib64/libcudnn*
```
## Install libcupti-dev
This one is simple. Just run:
```bash
sudo apt-get install libcupti-dev
```
## Create a Virtualenv
I recommend using
[virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/index.html)
to create the tensorflow virtualenv, but the TensorFlow docs still have
[instructions to create the virtualenv
manually](https://www.tensorflow.org/install/install_linux#InstallingVirtualenv).
1. [Install
virtualenvwrapper](https://virtualenvwrapper.readthedocs.io/en/latest/install.html).
Make sure to add [the required
lines](https://virtualenvwrapper.readthedocs.io/en/latest/install.html#shell-startup-file)
to your `~/.bashrc`.
2. Create the virtualenv:
```bash
mkvirtualenv --python=python3 tensorflow
```
## Install TensorFlow with GPU support
If you just run `pip install tensorflow` you will not get GPU support. To
install the correct version you will have to install from a [particular
url](https://www.tensorflow.org/install/install_linux#python_35). Here is the
install command you will have to run to install TensorFlow 1.2 for Python 3.5
with GPU support:
```bash
pip install https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.0-cp35-cp35m-linux_x86_64.whl
```
If you need a different version of TensorFlow, you can edit the version number
in the URL. Same with the Python version (change `cp35` to `cp36` to install for
Python 3.6 instead, for example).
## Test that the installation worked
Save this script from [the TensorFlow
tutorials](https://www.tensorflow.org/tutorials/using_gpu#logging_device_placement)
to a file called `test_gpu.py`:
```python
import tensorflow as tf

# Creates a graph.
with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))
```
And then run it:
```bash
python test_gpu.py
```
You should see your GPU card listed under "Device mapping:" and that each task
in the compute graph is assigned to `gpu:0`.
If you see "Device mapping: no known devices" then something went wrong and
TensorFlow cannot access your GPU.

---
title: Generating Random Poems with Python
layout: post
image: /img/blog/buzzfeed.jpg
---
In this post, I will demonstrate how to generate random text using a few lines
of standard python and then progressively refine the output until it looks
poem-like.
If you would like to follow along with this post and run the code snippets
yourself, you can clone [my NLP repository](https://github.com/thallada/nlp/)
and run [the Jupyter
notebook](https://github.com/thallada/nlp/blob/master/edX%20Lightning%20Talk.ipynb).
You might not realize it, but you probably use an app everyday that can generate
random text that sounds like you: your phone keyboard.
<!--excerpt-->
![Suggested next words UI feature on the iOS
keyboard](/img/blog/phone_keyboard.jpg)
Just by tapping the next suggested word over and over, you can generate text. So how does it work?
## Corpus
First, we need a **corpus**: the text our generator will recombine into new
sentences. In the case of your phone keyboard, this is all the text you've ever
typed into your keyboard. For our example, let's just start with one sentence:
```python
corpus = 'The quick brown fox jumps over the lazy dog'
```
## Tokenization
Now we need to split this corpus into individual **tokens** that we can operate
on. Since our objective is to eventually predict the next word from the previous
word, we will want our tokens to be individual words. This process is called
**tokenization**. The simplest way to tokenize a sentence into words is to split
on spaces:
```python
words = corpus.split(' ')
words
```
```python
['The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']
```
## Bigrams
Now, we will want to create **bigrams**. A bigram is a pair of two words that
are in the order they appear in the corpus. To create bigrams, we will iterate
through the list of the words with two indices, one of which is offset by one:
```python
bigrams = [b for b in zip(words[:-1], words[1:])]
bigrams
```
```python
[('The', 'quick'),
('quick', 'brown'),
('brown', 'fox'),
('fox', 'jumps'),
('jumps', 'over'),
('over', 'the'),
('the', 'lazy'),
('lazy', 'dog')]
```
How do we use the bigrams to predict the next word given the first word?
Return every second element where the first element matches the **condition**:
```python
condition = 'the'
next_words = [bigram[1] for bigram in bigrams
              if bigram[0].lower() == condition]
next_words
```
```python
['quick', 'lazy']
```
We have now found all of the possible words that can follow the condition "the"
according to our corpus: "quick" and "lazy".
<pre>
(<span style="color:blue">The</span> <span style="color:red">quick</span>) (quick brown) ... (<span style="color:blue">the</span> <span style="color:red">lazy</span>) (lazy dog)
</pre>
Either "<span style="color:red">quick</span>" or "<span
style="color:red">lazy</span>" could be the next word.
## Trigrams and N-grams
We can partition our corpus into groups of threes too:
<pre>
(<span style="color:blue">The</span> <span style="color:red">quick brown</span>) (quick brown fox) ... (<span style="color:blue">the</span> <span style="color:red">lazy dog</span>)
</pre>
Or, the condition can be two words (`condition = 'the lazy'`):
<pre>
(The quick brown) (quick brown fox) ... (<span style="color:blue">the lazy</span> <span style="color:red">dog</span>)
</pre>
These are called **trigrams**.
We can partition any **N** number of words together as **n-grams**.
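The bigram and trigram partitions above generalize to a short helper (a sketch: `ngrams` is my own name for it, not code from the post's notebook):

```python
def ngrams(words, n):
    """Partition a list of words into every run of n consecutive words."""
    # zip over n staggered copies of the list, offset by 0, 1, ..., n-1
    return list(zip(*[words[i:] for i in range(n)]))

words = 'The quick brown fox jumps over the lazy dog'.split(' ')
ngrams(words, 2)  # the bigrams from before
ngrams(words, 3)  # the trigrams
```

Setting `n=2` reproduces the bigram list from earlier; any larger `n` gives the corresponding n-grams.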
## Conditional Frequency Distributions
Earlier, we were able to compute the list of possible words to follow a
condition:
```python
next_words
```
```python
['quick', 'lazy']
```
But, in order to predict the next word, what we really want to compute is what
is the most likely next word out of all of the possible next words. In other
words, find the word that occurred the most often after the condition in the
corpus.
We can use a **Conditional Frequency Distribution (CFD)** to figure that out! A
**CFD** can tell us: given a **condition**, what is **likelihood** of each
possible outcome.
This is an example of a CFD with two conditions, displayed in table form. It is
counting words appearing in a text collection (source: nltk.org).
![Two tables, one for each condition: "News" and "Romance". The first column of
each table is 5 words: "the", "cute", "Monday", "could", and "will". The second
column is a tally of how often the word at the start of the row appears in the
corpus.](http://www.nltk.org/images/tally2.png)
Let's change up our corpus a little to better demonstrate the CFD:
```python
words = ('The quick brown fox jumped over the '
         'lazy dog and the quick cat').split(' ')
print(words)
```
```python
['The', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog', 'and', 'the', 'quick', 'cat']
```
Now, let's build the CFD. I use
[`defaultdicts`](https://docs.python.org/2/library/collections.html#defaultdict-objects)
to avoid having to initialize every new dict.
```python
from collections import defaultdict
cfd = defaultdict(lambda: defaultdict(lambda: 0))
for i in range(len(words) - 1):  # loop up to the next-to-last word
    cfd[words[i].lower()][words[i+1].lower()] += 1
# pretty print the defaultdict
{k: dict(v) for k, v in dict(cfd).items()}
```
```python
{'and': {'the': 1},
 'brown': {'fox': 1},
 'dog': {'and': 1},
 'fox': {'jumped': 1},
 'jumped': {'over': 1},
 'lazy': {'dog': 1},
 'over': {'the': 1},
 'quick': {'brown': 1, 'cat': 1},
 'the': {'lazy': 1, 'quick': 2}}
```
So, what's the most likely word to follow `'the'`?
```python
max(cfd['the'], key=cfd['the'].get)
```
```python
'quick'
```
Whole sentences can be the conditions and values too, which is basically the
way [cleverbot](http://www.cleverbot.com/) works.
![An example of a conversation with Cleverbot](/img/blog/cleverbot.jpg)
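To sketch the idea with a made-up miniature conversation corpus (the prompts and replies below are invented for illustration, not Cleverbot's actual data), the conditions simply become whole utterances:

```python
from collections import defaultdict

# Hypothetical (prompt, reply) pairs standing in for real chat logs.
conversation = [
    ('how are you', 'fine thanks'),
    ('how are you', 'not bad'),
    ('fine thanks', 'good to hear'),
]

# Condition on the whole previous sentence instead of the previous word.
responses = defaultdict(list)
for prompt, reply in conversation:
    responses[prompt].append(reply)

responses['how are you']  # every reply seen after this prompt
```

A chatbot would then pick (randomly, or by frequency) among the replies recorded for the incoming sentence.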
## Random Text
Let's put this all together and, with a little help from
[nltk](http://www.nltk.org/), generate some random text.
```python
import nltk
import random
TEXT = nltk.corpus.gutenberg.words('austen-emma.txt')
# NLTK shortcuts :)
bigrams = nltk.bigrams(TEXT)
cfd = nltk.ConditionalFreqDist(bigrams)
# pick a random word from the corpus to start with
word = random.choice(TEXT)
# generate 15 more words
for i in range(15):
    print(word, end=' ')
    if word in cfd:
        word = random.choice(list(cfd[word].keys()))
    else:
        break
```
Which outputs something like:
```
her reserve and concealment towards some feelings in moving slowly together .
You will shew
```
Great! This is basically what the phone keyboard suggestions are doing. Now how
do we take this to the next level and generate text that looks like a poem?
## Random Poems
Generating random poems is accomplished by limiting the choice of the next word
by some constraint:
* words that rhyme with the previous line
* words that match a certain syllable count
* words that alliterate with words on the same line
* etc.
## Rhyming
### Written English != Spoken English
English has a highly **nonphonemic orthography**, meaning that the letters often
have no correspondence to the pronunciation. E.g.:
> "meet" vs. "meat"
The vowels are spelled differently, yet they rhyme [^1].
So if the spelling of the words is useless in telling us if two words rhyme,
what can we use instead?
### International Phonetic Alphabet (IPA)
The IPA is an alphabet that can represent all varieties of human pronunciation.
* meet: /mit/
* meat: /mit/
Note that this is the IPA transcription for only one **accent** of English.
Some English speakers may pronounce these words differently which could be
represented by a different IPA transcription.
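This is what makes a rhyme check possible: compare everything from the last stressed vowel onward. Here is a sketch using a tiny hand-rolled table of Arpabet-style phoneme lists (the encoding covered below); a real implementation would pull pronunciations from a full pronouncing dictionary instead:

```python
# Tiny illustrative pronunciation table (Arpabet-style phonemes, where
# vowels carry a stress digit). Hand-rolled for this example only.
PRON = {
    'meet': ['M', 'IY1', 'T'],
    'meat': ['M', 'IY1', 'T'],
    'poet': ['P', 'OW1', 'AH0', 'T'],
}

def rhyme_part(phones):
    """Everything from the last stressed vowel (stress digit 1 or 2) on."""
    for i in range(len(phones) - 1, -1, -1):
        if phones[i][-1] in '12':
            return phones[i:]
    return phones

def rhymes(a, b):
    """A rough heuristic: two words rhyme if their endings match."""
    return rhyme_part(PRON[a]) == rhyme_part(PRON[b])

rhymes('meet', 'meat')  # True
```

"meet" and "meat" rhyme because their phoneme lists end identically, even though their spellings differ.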
## Syllables
How can we determine the number of syllables in a word? Let's consider the two
words "poet" and "does":
* "poet" = 2 syllables
* "does" = 1 syllable
The vowels in these two words are written the same, but are pronounced
differently with a different number of syllables.
Can the IPA tell us the number of syllables in a word too?
* poet: /ˈpoʊət/
* does: /ˈdʌz/
Not really... We cannot easily identify the number of syllables from those
transcriptions. Sometimes the transcriber denotes syllable breaks with a `.` or
a `'`, but sometimes they don't.
### Arpabet
The Arpabet is a phonetic alphabet developed by ARPA in the 70s that:
* Encodes phonemes specific to American English.
* Is meant to be a machine-readable code. It is ASCII only.
* Denotes how stressed every vowel is from 0-2.
This is perfect! Because of that third bullet, a word's syllable count equals
the number of digits in the Arpabet encoding.
### CMU Pronouncing Dictionary (CMUdict)
A large open source dictionary of English words to North American
pronunciations in Arpabet encoding. Conveniently, it is also in NLTK...
### Counting Syllables
```python
import string
from nltk.corpus import cmudict

cmu = cmudict.dict()

def count_syllables(word):
    lower_word = word.lower()
    if lower_word in cmu:
        return max([len([y for y in x if y[-1] in string.digits])
                    for x in cmu[lower_word]])

print("poet: {}\ndoes: {}".format(count_syllables("poet"),
                                  count_syllables("does")))
```
Results in:
```
poet: 2
does: 1
```
## Buzzfeed Haiku Generator
To see this in action, try out a haiku generator I created that uses Buzzfeed
article titles as a corpus. It does not incorporate rhyming; it just counts the
syllables to make sure it's [5-7-5](https://en.wikipedia.org/wiki/Haiku). You can view the full code
[here](https://github.com/thallada/nlp/blob/master/generate_poem.py).
![Buzzfeed Haiku Generator](/img/blog/buzzfeed.jpg)
Run it live at:
[http://mule.hallada.net/nlp/buzzfeed-haiku-generator/](http://mule.hallada.net/nlp/buzzfeed-haiku-generator/)
## Syntax-aware Generation
Remember these?
![Example Mad Libs: "A Visit to the Dentist"](/img/blog/madlibs.jpg)
Mad Libs worked so well because they forced the random words (chosen by the
players) to fit into the syntactical structure and parts-of-speech of an
existing sentence.
You end up with **syntactically** correct sentences that are **semantically**
random. We can do the same thing!
### NLTK Syntax Trees!
NLTK can parse any sentence into a [syntax
tree](http://www.nltk.org/book/ch08.html). We can utilize this syntax tree
during poetry generation.
```python
from stat_parser import Parser
parsed = Parser().parse('The quick brown fox jumps over the lazy dog.')
print(parsed)
```
Syntax tree output as an
[s-expression](https://en.wikipedia.org/wiki/S-expression):
```
(S
  (NP (DT the) (NN quick))
  (VP
    (VB brown)
    (NP
      (NP (JJ fox) (NN jumps))
      (PP (IN over) (NP (DT the) (JJ lazy) (NN dog)))))
  (. .))
```
```python
parsed.pretty_print()
```
And the same tree visually pretty printed in ASCII:
```
S
________________________|__________________________
| VP |
| ____|_____________ |
| | NP |
| | _________|________ |
| | | PP |
| | | ________|___ |
NP | NP | NP |
___|____ | ___|____ | _______|____ |
DT NN VB JJ NN IN DT JJ NN .
| | | | | | | | | |
the quick brown fox jumps over the lazy dog .
```
NLTK also performs [part-of-speech tagging](http://www.nltk.org/book/ch05.html)
on the input sentence and outputs the tag at each node in the tree. Here's what
each of those mean:
| Tag    | Meaning                          |
|--------|----------------------------------|
| **S**  | Sentence                         |
| **VP** | Verb Phrase                      |
| **NP** | Noun Phrase                      |
| **DT** | Determiner                       |
| **NN** | Noun (common, singular)          |
| **VB** | Verb (base form)                 |
| **JJ** | Adjective (or numeral, ordinal)  |
| **.**  | Punctuation                      |
Now, let's use this information to swap matching syntax sub-trees between two
corpora ([source for the generate
function](https://github.com/thallada/nlp/blob/master/syntax_aware_generate.py)).
```python
from syntax_aware_generate import generate
# inserts matching syntax subtrees from trump.txt into
# trees from austen-emma.txt
generate('trump.txt', word_limit=10)
```
```
(SBARQ
  (SQ
    (NP (PRP I))
    (VP (VBP do) (RB not) (VB advise) (NP (DT the) (NN custard))))
  (. .))
I do not advise the custard .
==============================
I do n't want the drone !
(SBARQ
  (SQ
    (NP (PRP I))
    (VP (VBP do) (RB n't) (VB want) (NP (DT the) (NN drone))))
  (. !))
```
Above the line is a sentence selected from a corpus of Jane Austen's *Emma*.
Below it is a sentence generated by walking down the syntax tree and finding
sub-trees from a corpus of Trump's tweets that match the same syntactical
structure and then swapping the words in.
The result can sometimes be amusing, but more often than not, this approach
doesn't fare much better than the n-gram based generation.
### spaCy
I'm only beginning to experiment with the [spaCy](https://spacy.io/) Python
library, but I like it a lot. For one, it is much, much faster than NLTK:
![spaCy speed comparison](/img/blog/spacy_speed.jpg)
[https://spacy.io/docs/api/#speed-comparison](https://spacy.io/docs/api/#speed-comparison)
The [API](https://spacy.io/docs/api/) takes a little getting used to coming from
NLTK. It doesn't seem to have any sort of out-of-the-box solution to printing
out syntax trees like above, but it does do [part-of-speech
tagging](https://spacy.io/docs/api/tagger) and [dependency relation
mapping](https://spacy.io/docs/api/dependencyparser) which should accomplish
about the same. You can see both of these visually with
[displaCy](https://demos.explosion.ai/displacy/).
## Neural Network Based Generation
If you haven't heard all the buzz about [neural
networks](https://en.wikipedia.org/wiki/Artificial_neural_network), they are a
particular technique for [machine
learning](https://en.wikipedia.org/wiki/Machine_learning) that's inspired by our
understanding of the human brain. They are structured into layers of nodes, with
connections between nodes in adjacent layers. Each connection has a weight; a
node multiplies each input by the corresponding weight, sums the results, and
passes the sum through an [activation
function](https://en.wikipedia.org/wiki/Activation_function) to output a single
number. The optimal weights for solving a particular problem are learned by
training the network with
[backpropagation](https://en.wikipedia.org/wiki/Backpropagation) to perform
[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) on a
[cost function](https://en.wikipedia.org/wiki/Loss_function) that balances
getting the correct answer against
[generalizing](https://en.wikipedia.org/wiki/Regularization_(mathematics)) well
enough to perform on data the network hasn't seen before.
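As a toy illustration (not any particular library's API), a single node's computation boils down to a weighted sum passed through an activation function, here a sigmoid:

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of the inputs, then a sigmoid activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# with zero weights and bias, the sigmoid sits at its midpoint
print(neuron([0.5, -1.0], [0.0, 0.0], 0.0))  # → 0.5
```

A network is just many of these stacked in layers, with training nudging the weights and biases toward values that minimize the cost function.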
[Long short-term memory
(LSTM)](https://en.wikipedia.org/wiki/Long_short-term_memory) is a type of
[recurrent neural network
(RNN)](https://en.wikipedia.org/wiki/Recurrent_neural_network) (a network with
cycles) that can remember previous values for a short or long period of time.
This property makes them remarkably effective at a multitude of tasks, one of
which is predicting text that will follow a given sequence. We can use this to
continually generate text by inputting a seed, appending the generated output to
the end of the seed, removing the first element from the beginning of the seed,
and then inputting the seed again, following the same process until we've
generated enough text from the network ([paper on using RNNs to generate
text](http://www.cs.utoronto.ca/~ilya/pubs/2011/LANG-RNN.pdf)).
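The seed-sliding loop itself is simple. Here's a sketch where `predict_next` is a stand-in for the trained network (it just cycles through a tiny fixed vocabulary so the loop can be demonstrated; a real model would return the most likely next word):

```python
def predict_next(seed):
    # hypothetical stand-in for the RNN's prediction
    vocab = ["the", "drone", "wants", "custard"]
    return vocab[sum(len(w) for w in seed) % len(vocab)]

def generate(seed, n_words):
    seed = list(seed)
    output = []
    for _ in range(n_words):
        word = predict_next(seed)
        output.append(word)
        seed.append(word)  # append the generated word to the end of the seed...
        seed.pop(0)        # ...and drop the first word, keeping the seed length fixed
    return output

print(generate(["i", "do", "not"], 5))
```

Each iteration slides the fixed-length window one word forward, so the network always sees the most recent context.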
Luckily, a lot of smart people have done most of the legwork so you can just
download their neural network architecture and train it yourself. There's
[char-rnn](https://github.com/karpathy/char-rnn) which has some [really exciting
results for generating texts (e.g. fake
Shakespeare)](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). There's
also [word-rnn](https://github.com/larspars/word-rnn) which is a modified
version of char-rnn that operates on words as a unit instead of characters.
Follow [my last blog post on how to install TensorFlow on Ubuntu
16.04](/2017/06/20/how-to-install-tensorflow-on-ubuntu-16-04.html) and
you'll be almost ready to run a TensorFlow port of word-rnn:
[word-rnn-tensorflow](https://github.com/hunkim/word-rnn-tensorflow).
I plan on playing around with NNs a lot more to see what kind of poetry-looking
text I can generate from them.
---
[^1]:
    Fun fact: they were pronounced differently in Middle English, around the
    time the printing press was invented and spelling became standardized. The
    [Great Vowel Shift](https://en.wikipedia.org/wiki/Great_Vowel_Shift)
    happened afterward, which is why they are now pronounced the same.

---
title: "Proximity Structures: Playing around with PixiJS"
layout: post
image: /img/blog/proximity-structures.png
---
I've been messing around with a library called [PixiJS](http://www.pixijs.com/)
that renders WebGL animations, falling back to HTML5 canvas if WebGL
is not available in the browser. I mostly like it because the API is
similar to HTML5 canvas, which [I was already familiar
with](https://github.com/thallada/thallada.github.io/blob/master/js/magic.js). I
can't say that I like the rest of the PixiJS API and documentation that much,
though. For this project, I mostly just used a small portion of it to create
[WebGL (GPU accelerated) primitive
shapes](http://www.goodboydigital.com/pixi-webgl-primitives/) (lines and
circles).
<!--excerpt-->
**Play with it here**: [http://proximity.hallada.net](http://proximity.hallada.net)
**Read/clone the code here**: [https://github.com/thallada/proximity-structures](https://github.com/thallada/proximity-structures)
[![The animation in
action](/img/blog/proximity-structures.gif)](http://proximity.hallada.net)
The idea was inspired by
[all](https://thumb9.shutterstock.com/display_pic_with_logo/3217643/418838422/stock-vector-abstract-technology-futuristic-network-418838422.jpg)
[those](https://ak5.picdn.net/shutterstock/videos/27007555/thumb/10.jpg)
[countless](https://ak9.picdn.net/shutterstock/videos/10477484/thumb/1.jpg)
[node](https://ak3.picdn.net/shutterstock/videos/25825727/thumb/1.jpg)
[network](https://t4.ftcdn.net/jpg/00/93/24/21/500_F_93242102_mqtDljufY7CNY0wMxunSbyDi23yNs1DU.jpg)
[graphics](https://ak6.picdn.net/shutterstock/videos/12997085/thumb/1.jpg) that
I see all the time as stock graphics on generic tech articles.
This was really fun to program. I didn't care much about perfect code, I just
kept hacking one thing onto another while watching the instantaneous feedback of
the points and lines responding to my changes until I had something worth
sharing.
### Details
The majority of the animation you see is based on
[tweening](https://en.wikipedia.org/wiki/Inbetweening). Each point has an origin
and destination stored in memory. Every clock tick (orchestrated by the almighty
[requestAnimationFrame](https://developer.mozilla.org/en-US/docs/Web/API/window/requestAnimationFrame)),
the main loop calculates where each point should be in the path between its
origin and destination based on how long until it completes its "cycle". There
is a global `cycleDuration`, defaulted to 60. Every frame increments the cycle
counter by 1 until it reaches 60, at which point it wraps back around to 0. Every
point is assigned a number between 1 and 60: its start cycle. When the
global cycle counter equals a point's start cycle number, the point has reached
its destination and a new target destination is randomly chosen.
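In other words, each point's position is an interpolation between its origin and destination, driven by how far it is through its cycle. With a simple linear tween (the project itself swaps in fancier easing functions), the math looks roughly like this sketch:

```python
def tween_position(origin, destination, global_cycle, start_cycle, cycle_duration=60):
    # fraction of the way through this point's current cycle, from 0.0 to 1.0
    t = ((global_cycle - start_cycle) % cycle_duration) / cycle_duration
    x = origin[0] + (destination[0] - origin[0]) * t
    y = origin[1] + (destination[1] - origin[1]) * t
    return (x, y)

# halfway through its cycle, the point is halfway along its path
print(tween_position((0, 0), (100, 40), global_cycle=30, start_cycle=0))  # → (50.0, 20.0)
```

Swapping the linear `t` for an elastic or bounce easing curve is what gives the points their apparent physical behavior.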
Each point is also randomly assigned a color. When a point is within
`connectionDistance` of another point in the canvas, a line is drawn between the
two points, their colors are averaged, and the points' colors become the average
color weighted by the distance between the points. You can see clusters of
points converging on a color in the animation.
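The color mixing can be sketched the same way: the closer two connected points are, the more strongly each one's color pulls toward their average (the exact weighting below is illustrative, not the project's actual formula):

```python
def blend(color, other, distance, connection_distance):
    # weight runs from 0.5 (touching, full average) down to 0.0
    # (at the edge of connection range, no influence)
    w = max(0.0, 1 - distance / connection_distance) / 2
    return tuple((1 - w) * a + w * b for a, b in zip(color, other))

# touching points meet at the exact average of their RGB channels
print(blend((255, 0, 0), (0, 0, 255), distance=0, connection_distance=100))  # → (127.5, 0.0, 127.5)
```

Applied symmetrically every frame, this is what makes clusters of nearby points converge on a shared color.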
Click interaction is implemented by modifying point target destinations within a
radius around the click. Initially, a mouse hover will push points away.
Clicking and holding will draw points in, progressively growing the effect
radius in the process to capture more and more points.
I thought it was really neat that without integrating any physics engine
whatsoever, I ended up with something that looked sort of physics based thanks
to the tweening functions. Changing the tweening functions that the points use
seems to change the physical properties and interactions of the points. The
elastic tweening function makes the connections between the points snap like
rubber bands. And, while I am not drawing any explicit polygons, just points and
lines based on proximity, it sometimes looks like the points are coalescing into
some three-dimensional structure.
I'll probably make another procedural animation like this in the future since it
was so fun. Next time, I'll start from the get-go with ES2015 (or ES7,
or ES8??) and proper data structures.

---
title: "Making a Mailing List for a Jekyll Blog Using Sendy"
layout: post
---
When my beloved [Google Reader](https://en.wikipedia.org/wiki/Google_Reader) was
discontinued in 2013, I stopped regularly checking RSS feeds. Apparently, [I am
not alone](https://trends.google.com/trends/explore?date=all&q=rss). It seems
like there's a new article every month arguing either that [RSS is dead or RSS
is not dead
yet](https://hn.algolia.com/?q=&query=rss%20dead&sort=byPopularity&prefix&page=0&dateRange=all&type=story).
Maybe RSS will stick around to serve as a cross-site communication backbone, but
I don't think anyone will refute that RSS feeds are declining in consumer use.
Facebook, Twitter, and other aggregators are where people really go. However, I
noticed that I still follow some small infrequent blogs through mailing lists
that they offer. I'm really happy to see an email sign up on blogs I like,
because it means I'll know when they post new content in the future. I check my
email regularly unlike my RSS feeds.
<!--excerpt-->
Even though I'm sure my blog is still too uninteresting and unheard of to get
many signups, I still wanted to know what it took to make a blog mailing list.
RSS is super simple for website owners, because all they need to do is dump all
of their content into a specially formatted XML file, host it, and let RSS
readers deal with all the complexity. In my blog, [I didn't even need a
Jekyll
plugin](https://github.com/thallada/thallada.github.io/blob/master/feed.xml).
Email is significantly more difficult. With email, the website owner owns more
of the complexity. And spam filters make it infeasible to roll your own email
server: a couple of people can mark you as spam, and BAM, now you are blacklisted
and you have to move to a new IP address. This is why most people turn to a
hosted service like [Mailchimp](https://mailchimp.com/), though I was
dissatisfied with that because of the [high costs and measly free
tier](https://mailchimp.com/pricing/).
[Amazon Simple Email Service (SES)](https://aws.amazon.com/ses/) deals with all
the complexity of email for you and is also
[cheap](https://aws.amazon.com/ses/pricing/). In fact, it's free unless you have
more than 62,000 subscribers or post much more often than once a month, and even
after that it's a dime for every 1,000 emails sent. Frankly, no one can really
compete with what Amazon is offering here.
Okay, so that covers sending the emails, but what about collecting and storing
subscriptions? SES doesn't handle any of that. I searched around a long time for
something simple and free that wouldn't require me setting up a server [^1]. I
eventually ended up going with [Sendy](https://sendy.co/) because it looked like
a well-designed product exactly for this use case that also handled drafting
emails, email templates, confirmation emails, and analytics. It costs a one-time
fee of $59 and I was willing to fork that over for quality software. Especially
since most other email newsletter services require some sort of monthly
subscription that scales with the number of emails you are sending.
Unfortunately, since Sendy is self-hosted, I had to add a dynamic server to my
otherwise completely static Jekyll website hosted for free on [Github
Pages](https://pages.github.com/). You can put Sendy on pretty much anything
that runs PHP and MySQL including the cheap [t2.micro Amazon EC2 instance
type](https://aws.amazon.com/ec2/instance-types/). If you are clever, you might
find a cheaper way. I already had a t2.medium for general development,
tinkering, and hosting, so I just used that.
There are many guides out there for setting up MySQL and Apache, so I won't go
over that. But, I do want to mention how I got Sendy to integrate with
[nginx](https://nginx.org/en/) which is the server engine I was already using. I
like to put separate services I'm running under different subdomains of
my domain hallada.net even though they are running on the same server and IP
address. For Sendy, I chose [list.hallada.net](http://list.hallada.net) [^2].
Setting up another subdomain in nginx requires [creating a new server
block](https://askubuntu.com/a/766369). There's [a great Gist of a config for
powering Sendy using nginx and
FastCGI](https://gist.github.com/refringe/6545132), but I ran into so many
issues with the subdomain that I decided to use nginx as a proxy to the Apache
mod_php site running Sendy. I'll just post my config here:
```nginx
server {
    listen 80;
    listen [::]:80;

    server_name list.hallada.net;

    root /var/www/html/sendy;
    index index.php;

    location /l/ {
        rewrite ^/l/([a-zA-Z0-9/]+)$ /l.php?i=$1 last;
    }

    location /t/ {
        rewrite ^/t/([a-zA-Z0-9/]+)$ /t.php?i=$1 last;
    }

    location /w/ {
        rewrite ^/w/([a-zA-Z0-9/]+)$ /w.php?i=$1 last;
    }

    location /unsubscribe/ {
        rewrite ^/unsubscribe/(.*)$ /unsubscribe.php?i=$1 last;
    }

    location /subscribe/ {
        rewrite ^/subscribe/(.*)$ /subscribe.php?i=$1 last;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080/sendy/;
    }
}
```
Basically, this proxies all of the requests through to Apache which I configured
to run on port 8080 by changing the `Listen` directive in
`/etc/apache2/ports.conf`.
I also had to add `RewriteBase /sendy` to the end of the `.htaccess` file in
the Sendy directory (which, for me, was `/var/www/html/sendy`). This
forces Sendy to use URLs that start with `http://list.hallada.net`
instead of `http://list.hallada.net/sendy`, which I found redundant since I
am dedicating the whole subdomain to Sendy.
A perplexing issue I ran into was that Gmail accounts were completely dropping
(not even bouncing!) any emails I sent to them if I used my personal email
`tyler@hallada.net` as the from address. I switched to `tyhallada@gmail.com` for
the from address and emails went through fine after that [^4]. [The issue seems
unresolved](https://forums.aws.amazon.com/thread.jspa?messageID=802461&#802461)
as of this post.
Lastly, I needed to create a form on my website for readers to sign up for the
mailing list. Sendy provides the HTML in the UI to create the form, which I
[tweaked a
little](https://github.com/thallada/thallada.github.io/blob/master/_includes/mail-form.html)
and placed in a [Jekyll includes template
partial](https://jekyllrb.com/docs/includes/) that I could include on both the
post layout and the blog index template. I refuse to pollute the internet with
yet another annoying email newsletter form that pops up while you are trying to
read the article, so you can find my current version at the bottom of this
article where it belongs [^5].
All in all, setting up a mailing list this way wasn't too bad except for the part
where I spent way too much time fiddling with nginx configs. But, I always do
that, so I guess that's expected.
As for the content of the newsletter, I haven't yet figured out how to pipe the
entirety of a blog post into the HTML format of an email as soon as I commit a
new post. So, for now I will just manually create a new email
campaign in Sendy (from an email template) that links to the new
post, and send that.
---
[^1]:
It would be interesting to look into creating a [Google
Form](https://www.google.com/forms/about/) that submits rows to a [Google
Sheet](https://www.google.com/sheets/about/) and then triggering an [AWS
Lambda](https://aws.amazon.com/lambda/) function that iterates over the rows
using something like [the Google Sheets Python
API](https://developers.google.com/sheets/api/quickstart/python) and sends
an email for every user using the [Amazon SES
API](http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-api.html)
([python-amazon-ses-api](https://github.com/pankratiev/python-amazon-ses-api)
might also be useful there).
[^2]:
I ran into a hiccup [verifying this domain for Amazon
SES](http://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-domain-procedure.html)
using the [Namecheap](https://www.namecheap.com/) advanced DNS settings
because it only allowed me to set up one MX record, but I already had one
for my root hallada.net domain that I needed. So, I moved to [Amazon's Route
53](https://aws.amazon.com/route53/) instead [^3] which made setting up the
[DKIM
verification](http://docs.aws.amazon.com/ses/latest/DeveloperGuide/easy-dkim.html)
really easy since Amazon SES gave a button to create the necessary DNS
records directly in my Route 53 account.
[^3]:
As [Amazon continues its plan for world
domination](https://www.washingtonpost.com/business/is-amazon-getting-too-big/2017/07/28/ff38b9ca-722e-11e7-9eac-d56bd5568db8_story.html)
it appears I'm moving more and more of my personal infrastructure over to
Amazon as well...
[^4]: Obviously a conspiracy by Google to force domination of Gmail.
[^5]: Yes, I really hate those pop-ups.

---
title: "Isso Comments"
layout: post
---
I've been meaning to add a commenting system to this blog for a while, but I
couldn't think of a good way to do it. I implemented my own commenting system on
my [old Django personal site](https://github.com/thallada/personalsite). While I
enjoyed working on it at the time, it was a lot of work, especially to fight the
spam. Now that my blog is hosted statically on GitHub's servers, I have no way
to host something dynamic like comments.
<!--excerpt-->
[Disqus](http://disqus.com/) seems to be the popular solution to this problem
for other people that host static blogs. The way it works is that you serve a
JavaScript client script on the static site you own. The script makes AJAX
requests to a separate server that Disqus owns to retrieve comments and post new
ones.
The price you pay for using Disqus, however, is that [they get to sell all of
the data that you and your commenters give
them](https://replyable.com/2017/03/disqus-is-your-data-worth-trading-for-convenience/).
That reason, plus the fact that I wanted something more DIY, meant this blog has
gone without comments for a few years.
Then I discovered [Isso](https://github.com/posativ/isso). Isso calls itself a
lightweight alternative to [Disqus](http://disqus.com/). Isso allows you to
install the server code on your own server so that the comment data never goes
to a third party. Also, it does not require logging into some social media
account just to comment. Today, I installed it on my personal AWS EC2 instance
and added the Isso javascript client script on this blog. So far, my experience
with it has been great and it performs exactly the way I expect.
I hit a few snags while installing it, however.
## Debian Package
**I don't recommend using the Debian package anymore as it frequently goes out
of date and breaks on distribution upgrades. See bottom edit.**
There is a very handy [Debian package](https://github.com/jgraichen/debian-isso)
that someone has made for Isso. Since my server runs Ubuntu 16.04, and Ubuntu is
based on Debian, this is a package I can install with my normal Ubuntu
package-management utilities. There is no PPA to install since the package is in
the [main Ubuntu package archive](https://packages.ubuntu.com/xenial/isso). Just
run `sudo apt-get install isso`.
I got a bit confused after that point, though. I couldn't find any
documentation about how to actually configure and start the server
once you have installed it. This is what I did:
```bash
sudo cp /etc/default/isso /etc/isso.d/available/isso.cfg
sudo ln -s /etc/isso.d/available/isso.cfg /etc/isso.d/enabled/isso.cfg
```
Then you can edit `/etc/isso.d/available/isso.cfg` with your editor of choice to
[configure the Isso server for your
needs](https://posativ.org/isso/docs/configuration/server/). Make sure to set
the `host` variable to the URL for your static site.
Once you're done, you can run `sudo service isso restart` to reload the server
with the new configuration. `sudo service isso status` should report `Active
(running)`.
Right now, there should be a [gunicorn](http://gunicorn.org/) process running
the isso server. You can check that with `top` or running `ps aux | grep
gunicorn`, which should return something about "isso".
## Nginx Reverse Proxy
In order to map the URL "comments.hallada.net" to this new gunicorn server, I
need an [nginx reverse
proxy](https://www.nginx.com/resources/admin-guide/reverse-proxy/).
To do that, I made a new server block: `sudo vim
/etc/nginx/sites-available/isso` which I added:
```nginx
server {
    listen 80;
    listen [::]:80;

    server_name comments.hallada.net;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Script-Name /isso;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8000;
    }
}
```
Then I enabled this new server block with:
```bash
sudo ln -s /etc/nginx/sites-available/isso /etc/nginx/sites-enabled/isso
sudo systemctl restart nginx
```
## DNS Configuration
I added a new A record for "comments.hallada.net" that pointed to my server's IP
address to the DNS configuration for my domain (which I recently switched to
[Amazon Route 53](https://aws.amazon.com/route53/)).
After the DNS caches had time to refresh, visiting `http://comments.hallada.net`
would hit the new `isso` nginx server block, which would then pass the request
on to the gunicorn process.
You can verify if nginx is getting the request by looking at
`/var/log/nginx/access.log`.
## Adding the Isso Script to my Jekyll Site
I created a file called `_includes/comments.html` with the contents that [the
Isso documentation](https://posativ.org/isso/docs/quickstart/#integration)
provides. Then, in my post template, I simply included that on the page where I
wanted the comments to go:
```html
{% include comments.html %}
```
Another thing that was not immediately obvious to me is that the value of the
`name` variable in the Isso server configuration is the URL path that you will
need to point the Isso JavaScript client to. For example, I chose `name = blog`,
so the `data-isso` attribute on the script tag needed to be
`http://comments.hallada.net/blog/`.
## The Uncaught ReferenceError
**You won't need to fix this if you install Isso from PIP! See bottom edit.**
There's [an issue](https://github.com/posativ/isso/issues/318) with that Debian
package that causes a JavaScript error in the console when trying to load the
Isso script in the browser. I solved this by uploading the latest version of the
Isso `embeded.min.js` file to my server, which I put at
`/var/www/html/isso/embeded.min.js`. Then I modified the nginx server block to
serve that file when the path matches `/isso`:
```nginx
server {
    listen 80;
    listen [::]:80;

    server_name comments.hallada.net;

    root /var/www/html;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Script-Name /isso;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8000;
    }

    location /isso {
        try_files $uri $uri/ $uri.php?$args =404;
    }
}
```
Now requesting `http://comments.hallada.net/isso/embeded.min.js` would return
the newer script without the bug.
## Sending Emails Through Amazon Simple Email Service
I already set up [Amazon's SES](https://aws.amazon.com/ses/) in my [last
blog
post](http://www.hallada.net/2017/08/30/making-mailing-list-jekyll-blog-using-sendy.html).
To get Isso to use SES to send notifications about new comments, create a new
credential in the SES UI, and then set the `user` and `password` fields in the
`isso.cfg` to what gets generated for the IAM user. The SES page also has
information for what `host` and `port` to use. I used `security = starttls` and
`port = 587`. Make sure whatever email you use for `from` is a verified email in
SES. Also, don't forget to add your email as the `to` value.
## Enabling HTTPS with Let's Encrypt
[Let's Encrypt](https://letsencrypt.org/) allows you to get SSL certificates for
free! I had already installed the certbot/letsencrypt client before, so I just
ran this to generate a new certificate for my new sub-domain
"comments.hallada.net":
```bash
sudo letsencrypt certonly --nginx -d comments.hallada.net
```
Once that successfully completed, I added a new nginx server block for the https
version at `/etc/nginx/sites-available/isso-https`:
```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name comments.hallada.net;

    root /var/www/html;

    ssl_certificate /etc/letsencrypt/live/comments.hallada.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/comments.hallada.net/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/comments.hallada.net/fullchain.pem;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Script-Name /isso;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:8000;
    }

    location /isso {
        try_files $uri $uri/ $uri.php?$args =404;
    }
}
```
And, I changed the old http server block so that it just permanently redirects
to the https version:
```nginx
server {
    listen 80;
    listen [::]:80;

    server_name comments.hallada.net;

    root /var/www/html;

    location / {
        return 301 https://comments.hallada.net$request_uri;
    }
}
```
Then I enabled the https version:
```bash
sudo ln -s /etc/nginx/sites-available/isso-https /etc/nginx/sites-enabled/isso-https
sudo systemctl restart nginx
```
I checked that I didn't get any errors visiting `https://comments.hallada.net/`,
and then changed my Jekyll include snippet so that it pointed at the `https`
site instead of `http`.
Now you can securely leave a comment if you want to yell at me for writing the
wrong thing!
## EDIT 5/28/2019:
I don't recommend using the Debian package anymore since it frequently goes out
of date and breaks when upgrading your Linux distribution.
Instead, follow the [Isso docs](https://posativ.org/isso/docs/install/) by
creating a [virtualenv](https://virtualenv.pypa.io/en/latest/) and then run `pip
install isso` and `pip install gunicorn` from within the virtualenv. Then, when
creating [a systemd
service](https://github.com/jgraichen/debian-isso/blob/master/debian/isso.service),
make sure to point to the gunicorn executable in that virtualenv (e.g.
`/opt/isso/bin/gunicorn`). It should load and run Isso from the same virtualenv.

---
title: "Studio-Frontend: Developing Frontend Separate from edX Platform"
layout: post
---
*This is a blog post that I originally wrote for the [edX engineering
blog](https://engineering.edx.org/).*
At the core of edX is the [edx-platform](https://github.com/edx/edx-platform), a
monolithic Django code-base 2.7 times the size of Django itself.
<!--excerpt-->
```
-------------------------------------------------------------------------------
 Language          Files     Lines      Code  Comments   Blanks
-------------------------------------------------------------------------------
 ActionScript          1       118        74        23       21
 Autoconf             10       425       237       163       25
 CSS                  55     17106     14636      1104     1366
 HTML                668     72567     36865     30306     5396
 JavaScript         1500    463147    352306     55882    54959
 JSON                 91     14583     14583         0        0
 JSX                  33      2595      2209        62      324
 LESS                  1       949       606       232      111
 Makefile              1        65        49         8        8
 Markdown             23       287       287         0        0
 Mustache              1         1         1         0        0
 Python             3277    559255    442756     29254    87245
 ReStructuredText     48      4252      4252         0        0
 Sass                424     75559     55569      4555    15435
 Shell                15       929       505       292      132
 SQL                   4      6283      5081      1186       16
 Plain Text          148      3521      3521         0        0
 TypeScript           20     88506     76800     11381      325
 XML                 364      5283      4757       231      295
 YAML                 36      1630      1361       119      150
-------------------------------------------------------------------------------
 Total              6720   1317061   1016455    134798   165808
-------------------------------------------------------------------------------
```
35% of the edx-platform is JavaScript. While it has served edX well since its
inception in 2012, reaching over 11 million learners in thousands of courses on
[edX.org](https://www.edx.org/) and many more millions on all of the [Open edX
instances across the
world](https://openedx.atlassian.net/wiki/spaces/COMM/pages/162245773/Sites+powered+by+Open+edX),
it is starting to show its age. Most of it comes in the form of [Backbone.js
apps](http://backbonejs.org/) loaded by [RequireJS](http://requirejs.org/) in
Django [Mako templates](http://www.makotemplates.org/), with
[jQuery](https://jquery.com/) peppered throughout.
Many valiant efforts are underway to modernize the frontend of edx-platform
including replacing RequireJS with Webpack, Backbone.js with
[React](https://reactjs.org/), and ES5 JavaScript and CoffeeScript with ES6
JavaScript. Many of these efforts [were covered in detail at the last Open edX
conference](https://www.youtube.com/watch?v=xicBnbDX4AY) and in [Open edX
Proposal 11: Front End Technology
Standards](https://open-edx-proposals.readthedocs.io/en/latest/oep-0011-bp-FED-technology.html).
However, the size and complexity of the edx-platform mean that these kinds of
efforts are hard to prioritize, and, in the meantime, frontend developers are
forced to [wait over 10
minutes](https://openedx.atlassian.net/wiki/spaces/FEDX/pages/264700138/Asset+Compilation+Audit+2017-11-01)
for our home-grown asset pipeline to build before they can view changes.
There have also been efforts to incrementally modularize and extract parts of
the edx-platform into separate python packages that could be installed as
[Django apps](https://docs.djangoproject.com/en/2.0/ref/applications/), or even
as separately deployed
[microservices](https://en.wikipedia.org/wiki/Microservices). This allows
developers to work independently from the rest of the organization inside a
repository that they own and manage, and that is small enough that they could
feasibly understand it entirely.
When my team was tasked with improving the user experience of pages in
[Studio](https://studio.edx.org/), the tool that course authors use to create
course content, we opted to take a similar architectural approach with the
frontend and create a new repository where we could develop new pages in
isolation and then integrate them back into the edx-platform as a plugin. We
named this new independent repository
[studio-frontend](https://github.com/edx/studio-frontend). With this approach,
our team owns the entire studio-frontend code-base and can make the best
architectural changes required for its features without having to consult with
and contend with all of the other teams at edX that contribute to the
edx-platform. Developers of studio-frontend can also avoid the platform's slow
asset pipeline by doing all development within the studio-frontend repository
and only later integrating the changes into the platform.
## React and Paragon
When edX recently started to conform our platform to the [Web Content
Accessibility Guidelines 2.0 AA (WCAG 2.0
AA)](https://www.w3.org/WAI/intro/wcag), we faced many challenges in
retrofitting our existing frontend code to be accessible. Rebuilding Studio
pages from scratch in studio-frontend allows us to not only follow the latest
industry standards for building robust and performant frontend applications, but
to also build with accessibility in mind from the beginning.
The JavaScript community has made great strides recently to [address
accessibility issues in modern web
apps](https://reactjs.org/docs/accessibility.html). However, we had trouble
finding an open-source React component library that fully conformed to WCAG 2.0
AA and met all of edX's needs, so we decided to build our own:
[Paragon](https://github.com/edx/paragon).
Paragon is a library of building-block components like buttons, inputs, icons,
and tables which were built from scratch in React to be accessible. The
components are styled using the [Open edX theme of Bootstrap
v4](https://github.com/edx/edx-bootstrap) (edX's decision to adopt Bootstrap is
covered in
[OEP-16](https://open-edx-proposals.readthedocs.io/en/latest/oep-0016-bp-adopt-bootstrap.html)).
Users of Paragon may also choose to use the
[themeable](https://github.com/edx/paragon#export-targets) unstyled target and
provide their own Bootstrap theme.
![Paragon's modal component displayed in
Storybook](/img/blog/paragon-modal-storybook.jpg)
Studio-frontend composes together Paragon components into higher-level
components like [an accessibility
form](https://github.com/edx/studio-frontend/blob/master/src/accessibilityIndex.jsx)
or [a table for course assets with searching, filtering, sorting, pagination,
and upload](https://github.com/edx/studio-frontend/blob/master/src/index.jsx).
While we developed these components in studio-frontend, we were able to improve
the base Paragon components. Other teams at edX using the same components were
able to receive the same improvements with a single package update.
![Screenshot of the studio-frontend assets table inside of
Studio](/img/blog/studio-frontend-assets-table.jpg)
## Integration with Studio
We were able to follow the typical best practices for developing a React/Redux
application inside studio-frontend, but at the end of the day, we still had to
somehow get our components inside of existing Studio pages, and this is where
most of the challenges arose.
## Webpack
The aforementioned move from RequireJS to Webpack in the edx-platform made it
possible for us to build our studio-frontend components from source with Webpack
within edx-platform. However, this approach tied us to edx-platform's slow
asset pipeline. If we wanted rapid development, we had to duplicate the
necessary Webpack config between both studio-frontend and edx-platform.
Instead, studio-frontend handles building the development and production Webpack
builds itself. In development mode, the incremental rebuild that happens
automatically when a file is changed takes under a second. The production
JavaScript and CSS bundles, which take about 25 seconds to build, are published
with every new release to
[NPM](https://www.npmjs.com/package/@edx%2Fstudio-frontend). The edx-platform
`npm install`s studio-frontend and then copies the built production files from
`node_modules` into its Django static files directory, where the rest of the
asset pipeline picks them up.
To actually use the built JavaScript and CSS, edx-platform still needs to
include it in its Mako templates. We made a [Mako template
tag](https://github.com/edx/edx-platform/blob/master/common/djangoapps/pipeline_mako/templates/static_content.html#L93-L122)
that takes a Webpack entry point name in studio-frontend and generates script
tags that include the necessary files from the studio-frontend package. It also
dumps all of the initial context that studio-frontend needs from the
edx-platform Django app into [a JSON
object](https://github.com/edx/edx-platform/blob/master/cms/templates/asset_index.html#L36-L56)
in a script tag on the page that studio-frontend components can access via a
shared id. This is how studio-frontend components get initial data from Studio,
like which course it's embedded in.
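On the page, a component can recover that initial context by looking up the
script tag by its shared id and parsing the JSON. A minimal sketch of the idea
(the element id and `courseId` field are illustrative, not the actual names
studio-frontend uses):

```javascript
// Read the initial context that the Mako template dumped into a JSON
// script tag. The element id and field names here are illustrative.
function readInitialContext(doc, elementId) {
  const el = doc.getElementById(elementId);
  if (!el) {
    throw new Error(`No initial context found for #${elementId}`);
  }
  return JSON.parse(el.textContent);
}

// In the browser this would be called as:
// const context = readInitialContext(document, 'studio-context');
```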
For performance, modules that are shared across all studio-frontend components
are extracted into `common.min.js` and `common.min.css` files that are included
on every Studio template that has a studio-frontend component. Users' browsers
should cache these files so that they do not have to re-download libraries like
React and Redux every time they visit a new page that contains a studio-frontend
component.
## CSS Isolation
Since the move to Bootstrap had not yet reached the Studio part of the
edx-platform, most of the styling clashed with the Bootstrap CSS that
studio-frontend components introduced. The Bootstrap styles were also
leaking outside of the studio-frontend embedded component `div` and affecting
the rest of the Studio page around it.
![Diagram of a studio-frontend component embedded inside of
Studio](/img/blog/studio-frontend-style-isolation.jpg)
We were able to prevent styles leaking outside of the studio-frontend component
by scoping all CSS to only the `div` that wraps the component. Thanks to the
Webpack [postcss-loader](https://github.com/postcss/postcss-loader) and
[postcss-prepend-selector](https://github.com/ledniy/postcss-prepend-selector)
plugins, we were able to automatically scope all of our CSS selectors to that
`div` in our build process.
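The plugin takes the selector to prepend as an option, so the scoping amounts
to configuration along these lines (a sketch; the wrapper class name is
illustrative, not the one studio-frontend actually uses):

```javascript
// postcss.config.js (sketch) — prepend a wrapper selector to every
// rule so styles only apply inside the studio-frontend root div.
// The `.SFE-wrapper` class name is illustrative.
module.exports = {
  plugins: [
    require('postcss-prepend-selector')({ selector: '.SFE-wrapper ' }),
  ],
};
```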
Preventing the Studio styles from affecting our studio-frontend component was a
much harder problem because it means avoiding the inherently cascading nature of
CSS. A common solution to this issue is to place the 3rd-party component inside
of an
[`iframe`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/iframe)
element, which essentially creates a completely separate sub-page where both CSS
and JavaScript are isolated from the containing page. Because `iframe`s
introduce many other performance and styling issues, we wanted to find a
different solution to isolating CSS.
The CSS style [`all:
initial`](https://developer.mozilla.org/en-US/docs/Web/CSS/all) allows
resetting all properties on an element to their initial values as defined in the
CSS spec. Placing this style under a wildcard selector in studio-frontend
allowed us to reset all inherited properties from the legacy Studio styles without
having to enumerate them all by hand.
```css
* {
all: initial;
}
```
While this CSS property doesn't have broad browser support yet, we were able to
polyfill it thanks to postcss with the
[postcss-initial](https://github.com/maximkoretskiy/postcss-initial) plugin.
However, this resets the styles to *nothing*. For example, all `div`s are
displayed inline. To return the styles back to some sane browser default we
had to re-apply a browser default stylesheet. You can read more about this
technique at
[default-stylesheet](https://github.com/thallada/default-stylesheet).
From there, Bootstrap's
[reboot](https://getbootstrap.com/docs/4.0/content/reboot/) normalizes the
browser-specific styling to a common baseline and then applies the Bootstrap
styles conflict-free from the surrounding CSS cascade.
There's a candidate recommendation in CSS for a [`contain`
property](https://www.w3.org/TR/css-contain-1/), which will "allow strong,
predictable isolation of a subtree from the rest of the page". I hope that it
will provide a much more elegant solution to this problem once browsers support
it.
## Internationalization
Another major challenge with separating out the frontend from edx-platform was
that most of our internationalization tooling was instrumented inside the
edx-platform. So, in order to display text in studio-frontend components in the
correct language we either had to pass already-translated strings from the
edx-platform into studio-frontend, or set-up translations inside
studio-frontend.
We opted for the latter because it kept the content close to the code that used
it. Every display string in a component is stored in a
[displayMessages.jsx](https://github.com/edx/studio-frontend/blob/master/src/components/AssetsTable/displayMessages.jsx)
file and then imported and referenced by an id within the component. A periodic
job extracts these strings from the project, pushes them up to our translations
service [Transifex](https://www.transifex.com/), and pulls any new translations
to store them in our NPM package.
Because Transifex's `KEYVALUEJSON` file format does not allow for including
comments in the strings for translation, [Eric](https://github.com/efischer19)
created a library called [reactifex](https://github.com/efischer19/reactifex)
that will send the comments in separate API calls.
Studio includes the user's language in the context that it sends to a
studio-frontend component for initialization. Using this, the component can
display the message for that language if it exists. If it does not, then it will
display the original message in English and [wrap it in a `span` with `lang="en"`
as an
attribute](https://github.com/edx/studio-frontend/blob/master/src/utils/i18n/formattedMessageWrapper.jsx)
so that screen-readers know to read it in English even if their default is some
other language.
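The fallback logic can be sketched as a small function (studio-frontend does
this with react-intl components; the `messages` shape and function below are
illustrative, not its actual API):

```javascript
// Pick the message for the user's locale, falling back to the English
// source string. Returning the resolved lang lets a wrapping element
// set lang="en" for screen readers on untranslated fallbacks.
// The shape of `messages` here is illustrative.
function resolveMessage(messages, locale, id) {
  const translated = messages[locale] && messages[locale][id];
  if (translated) {
    return { text: translated, lang: locale };
  }
  return { text: messages.en[id], lang: 'en' };
}
```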
Read more about studio-frontend's internationalization process in [the
documentation that Eric
wrote](https://github.com/edx/studio-frontend/blob/master/src/data/i18n/README.md).
## Developing with Docker
To normalize the development environment across the whole studio-frontend team,
development is done in a Docker container. This is a minimal Ubuntu 16.04
container with a specific version of Node 8 installed, and its only purpose is to
run Webpack. This follows the pattern established in [OEP-5: Pre-built
Development
Environments](https://open-edx-proposals.readthedocs.io/en/latest/oep-0005-arch-containerize-devstack.html)
for running a single Docker container per process that developers can easily
start without installing dependencies.
Similar to edX's [devstack](https://github.com/edx/devstack), there is a Makefile
with commands to start and stop the docker container, which then
immediately runs [`npm run
start`](https://github.com/edx/studio-frontend/blob/master/package.json#L12),
which runs Webpack with the
[webpack-dev-server](https://github.com/webpack/webpack-dev-server). The
webpack-dev-server is a node server that serves assets built by Webpack.
[Studio-frontend's Webpack
config](https://github.com/edx/studio-frontend/blob/master/config/webpack.dev.config.js#L94)
makes this server available to the developer's host machine
at `http://localhost:18011`.
With [hot-reload](https://webpack.js.org/concepts/hot-module-replacement/)
enabled, developers can now visit that URL in their browser, edit source files
in studio-frontend, and then see changes reflected instantly in their browser
once Webpack finishes its incremental rebuild.
However, many studio-frontend components need to be able to talk to the
edx-platform Studio backend Django server. Using [Docker's network connect
feature](https://docs.docker.com/compose/networking/#use-a-pre-existing-network),
the studio-frontend container can join the developer's existing docker devstack
network so that the studio-frontend container can make requests to the docker
devstack Studio container at `http://edx.devstack.studio:18010/` and Studio can
access studio-frontend at `http://dahlia.studio-frontend:18011/`.
The webpack-dev-server can now [proxy all
requests](https://github.com/edx/studio-frontend/blob/master/config/webpack.dev.config.js#L101)
to Studio API endpoints (like `http://localhost:18011/assets`)
to `http://edx.devstack.studio:18010/`.
## Developing within Docker Devstack Studio
Since studio-frontend components will be embedded inside of an existing Studio
page shell, it's often useful to develop on studio-frontend containers inside of
this set-up. [This can be
done](https://github.com/edx/studio-frontend#development-inside-devstack-studio)
by setting a variable in the devstack's `cms/envs/private.py`:
```python
STUDIO_FRONTEND_CONTAINER_URL = 'http://localhost:18011'
```
This setting is checked in the Studio Mako templates wherever studio-frontend
components are embedded. If it is set to a value other than `None`, then the
templates will request assets from that URL instead of Studio's own static
assets directory. When a developer loads a Studio page with an embedded
studio-frontend component, their studio-frontend webpack-dev-server will be
requested at that URL. Similarly to developing on studio-frontend in isolation,
edits to source files will trigger a Webpack compilation and the Studio page
will be hot-reloaded or reloaded to reflect the changes automatically.
Since the studio-frontend JS loaded on `localhost:18010` is now requesting the
webpack-dev-server on `localhost:18011`,
an [`Access-Control-Allow-Origin` header](https://github.com/edx/studio-frontend/blob/master/config/webpack.dev.config.js#L98)
has to be configured on the webpack-dev-server to get around CORS violations.
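Taken together, the development-server pieces described above amount to
configuration along these lines (a sketch; the real settings live in
studio-frontend's `webpack.dev.config.js` and differ in detail):

```javascript
// webpack.dev.config.js (sketch) — values here summarize the setup
// described above rather than reproduce the real config exactly.
module.exports = {
  devServer: {
    port: 18011,
    // Let the Studio page on localhost:18010 load assets from this
    // server without CORS violations.
    headers: { 'Access-Control-Allow-Origin': '*' },
    // Forward Studio API calls to the devstack Studio container.
    proxy: {
      '/assets': { target: 'http://edx.devstack.studio:18010/' },
    },
  },
};
```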
![Diagram of studio-frontend's docker container communicating to Studio inside
of the devstack_default docker
network](/img/blog/studio-frontend-docker-devstack.jpg)
## Deploying to Production
[Each release of
studio-frontend](https://github.com/edx/studio-frontend#releases) will upload
the `/dist` files built by Webpack in production mode to
[NPM](https://www.npmjs.com/package/@edx/studio-frontend). edx-platform
requires a particular version of studio-frontend in its
[`package.json`](https://github.com/edx/edx-platform/blob/master/package.json#L7).
When a new release of edx-platform is made, `paver update_assets` runs,
copying all of the files in `node_modules/@edx/studio-frontend/dist/` to
the Studio static folder.
Because `STUDIO_FRONTEND_CONTAINER_URL` will be `None` in production, it will be
ignored, and Studio pages will request studio-frontend assets from that static
folder.
## Future
Instead of “bringing the new into the old”, we'd eventually like to move to a
model where we “work in the new and bring in the old if necessary”. We could
host studio-frontend statically on a completely separate server which talks to
Studio via a REST (or [GraphQL](https://graphql.org/)) API. This approach would
eliminate the complexity around CSS isolation and bring big performance wins for
our users, but it would require us to rewrite more of Studio.

---
title: "Generating icosahedrons and hexspheres in Rust"
layout: post
image: /img/blog/hexsphere_colored_7.png
---
I've been trying to learn [Rust](https://www.rust-lang.org/) lately, the hot new
systems programming language. One of the projects I wanted to tackle with the
speed of Rust was generating 3D polyhedron shapes. Specifically, I wanted to
implement something like the [Three.js
`IcosahedronGeometry`](https://threejs.org/docs/#api/en/geometries/IcosahedronGeometry)
in Rust. If you try to generate
[icosahedron](https://en.wikipedia.org/wiki/Icosahedron)s in Three.js at any
detail level over 5, the whole browser slows to a crawl. I think we can do
better in Rust!
Furthermore, I wanted to generate a hexsphere: a sphere composed of hexagon
faces and 12 pentagon faces, otherwise known as a truncated icosahedron or the
[Goldberg polyhedron](https://en.wikipedia.org/wiki/Goldberg_polyhedron). The
shape would be ideal for a game since (almost) every tile would have the same
area and six sides to defend or attack from. There are a few [JavaScript projects
for generating hexspheres](https://www.robscanlon.com/hexasphere/). Most of them
generate the shape by starting with a subdivided icosahedron and then truncating
the sides into hexagons, though there [exist other methods for generating the
hexsphere
shape](https://stackoverflow.com/questions/46777626/mathematically-producing-sphere-shaped-hexagonal-grid).
**Play around with all of these shapes in your browser at:
[https://www.hallada.net/planet/](https://www.hallada.net/planet/).**
So, how would we go about generating a hexsphere from scratch?
<!--excerpt-->
### The Icosahedron Seed
To start our sculpture, we need our ball of clay. The most basic shape that we
start with can be defined by its 20 triangle faces and 12 vertices: the regular
icosahedron. If you've ever played Dungeons and Dragons, this is the 20-sided
die.
To define this basic shape in Rust, we first need to define a few structs. The
most basic unit we need is a 3D vector which describes a single point in 3D
space with X, Y, and Z float values. I could have defined this myself, but to
avoid having to implement a bunch of vector operations (like add, subtract,
multiply, etc.) I chose to import
[`Vector3`](https://docs.rs/cgmath/0.17.0/cgmath/struct.Vector3.html) from the
[cgmath crate](https://crates.io/crates/cgmath).
The next struct we need is `Triangle`. This will define a face between three
vertices:
```rust
#[derive(Debug)]
pub struct Triangle {
pub a: usize,
pub b: usize,
pub c: usize,
}
impl Triangle {
fn new(a: usize, b: usize, c: usize) -> Triangle {
Triangle { a, b, c }
}
}
```
We use `usize` for the three points of the triangle because they are indices
into a [`Vec`](https://doc.rust-lang.org/std/vec/struct.Vec.html) of `Vector3`s.
To keep these all together, I'll define a `Polyhedron` struct:
```rust
#[derive(Debug)]
pub struct Polyhedron {
    pub positions: Vec<Vector3<f32>>,
pub cells: Vec<Triangle>,
}
```
With this, we can define the regular icosahedron:
```rust
impl Polyhedron {
pub fn regular_isocahedron() -> Polyhedron {
let t = (1.0 + (5.0 as f32).sqrt()) / 2.0;
Polyhedron {
positions: vec![
Vector3::new(-1.0, t, 0.0),
Vector3::new(1.0, t, 0.0),
Vector3::new(-1.0, -t, 0.0),
Vector3::new(1.0, -t, 0.0),
Vector3::new(0.0, -1.0, t),
Vector3::new(0.0, 1.0, t),
Vector3::new(0.0, -1.0, -t),
Vector3::new(0.0, 1.0, -t),
Vector3::new(t, 0.0, -1.0),
Vector3::new(t, 0.0, 1.0),
Vector3::new(-t, 0.0, -1.0),
Vector3::new(-t, 0.0, 1.0),
],
cells: vec![
Triangle::new(0, 11, 5),
Triangle::new(0, 5, 1),
Triangle::new(0, 1, 7),
Triangle::new(0, 7, 10),
Triangle::new(0, 10, 11),
Triangle::new(1, 5, 9),
Triangle::new(5, 11, 4),
Triangle::new(11, 10, 2),
Triangle::new(10, 7, 6),
Triangle::new(7, 1, 8),
Triangle::new(3, 9, 4),
Triangle::new(3, 4, 2),
Triangle::new(3, 2, 6),
Triangle::new(3, 6, 8),
Triangle::new(3, 8, 9),
Triangle::new(4, 9, 5),
Triangle::new(2, 4, 11),
Triangle::new(6, 2, 10),
Triangle::new(8, 6, 7),
Triangle::new(9, 8, 1),
],
}
}
}
```
### JSON Serialization
To prove this works, we need to output our shape to some format that can be
rendered. Coming from a JS background, I'm only familiar with
rendering shapes with WebGL. So, I need to be able to serialize the shape to
JSON so I can load it in JS.
There's an amazing library in Rust called
[serde](https://crates.io/crates/serde) that will make this very
straightforward. We just need to import it and `impl Serialize` for all of our
structs.
The JSON structure we want will look like this. This is what Three.js expects
when initializing
[`BufferGeometry`](https://threejs.org/docs/#api/en/core/BufferGeometry).
```json
{
"positions": [
[
-0.8506508,
0,
0.5257311
],
...
],
"cells": [
[
0,
1,
2,
],
...
],
}
```
For the `"cells"` array, we'll need to serialize `Triangle` into an array of 3
integer arrays:
```rust
impl Serialize for Triangle {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let vec_indices = vec![self.a, self.b, self.c];
let mut seq = serializer.serialize_seq(Some(vec_indices.len()))?;
for index in vec_indices {
seq.serialize_element(&index)?;
}
seq.end()
}
}
```
I had some trouble serializing the `cgmath::Vector3` to an array, so I made my
own type that wrapped `Vector3` that could be serialized to an array of 3
floats.
```rust
#[derive(Debug)]
pub struct ArraySerializedVector(pub Vector3<f32>);
impl Serialize for ArraySerializedVector {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
let values = vec![self.0.x, self.0.y, self.0.z];
let mut seq = serializer.serialize_seq(Some(values.len()))?;
for value in values {
seq.serialize_element(&value)?;
}
seq.end()
}
}
```
And now `Polyhedron` needs to use this new type and implement `Serialize` for
the whole shape to get serialized:
```rust
#[derive(Serialize, Debug)]
pub struct Polyhedron {
pub positions: Vec<ArraySerializedVector>,
pub cells: Vec<Triangle>,
}
```
The actual serialization is done with:
```rust
fn write_to_json_file(polyhedron: Polyhedron, path: &Path) {
let mut json_file = File::create(path).expect("Can't create file");
let json = serde_json::to_string(&polyhedron).expect("Problem serializing");
json_file
.write_all(json.as_bytes())
.expect("Can't write to file");
}
```
On the JS side, the `.json` file can be read and simply fed into either Three.js
or [regl](https://github.com/regl-project/regl) to be rendered in WebGL ([more on
that later](#rendering-in-webgl-with-regl)).
![Regular Icosahedron](/img/blog/icosahedron_colored_1.png)
## Subdivided Icosahedron
Now, we need to take our regular icosahedron and subdivide its faces N number of
times to generate an icosahedron with a detail level of N.
I pretty much copied most of [the subdividing code from
Three.js](https://github.com/mrdoob/three.js/blob/34dc2478c684066257e4e39351731a93c6107ef5/src/geometries/PolyhedronGeometry.js#L90)
directly into Rust.
I won't bore you with the details here; you can find the function
[here](https://github.com/thallada/icosahedron/blob/9643757df245e29f5ecfbb25f9a2c06b3a4e1217/src/lib.rs#L160-L205).
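As a rough sketch of what one subdivision pass does, using plain `[f32; 3]`
points instead of `cgmath` types to stay self-contained, and omitting both the
midpoint cache (which stops shared vertices from being duplicated) and the
re-projection of new vertices onto the sphere that the real code performs:

```rust
type Point = [f32; 3];

fn midpoint(a: Point, b: Point) -> Point {
    [(a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0, (a[2] + b[2]) / 2.0]
}

fn subdivide_once(positions: &mut Vec<Point>, cells: &[[usize; 3]]) -> Vec<[usize; 3]> {
    let mut out = Vec::with_capacity(cells.len() * 4);
    for &[a, b, c] in cells {
        // Midpoints of the three edges become new vertices.
        let (m_ab, m_bc, m_ca) = (
            midpoint(positions[a], positions[b]),
            midpoint(positions[b], positions[c]),
            midpoint(positions[c], positions[a]),
        );
        let ab = positions.len();
        positions.push(m_ab);
        let bc = positions.len();
        positions.push(m_bc);
        let ca = positions.len();
        positions.push(m_ca);
        // Each triangle splits into four: three corner triangles plus a center one.
        out.push([a, ab, ca]);
        out.push([b, bc, ab]);
        out.push([c, ca, bc]);
        out.push([ab, bc, ca]);
    }
    out
}
```

Each pass quadruples the triangle count, which is why high detail levels get
expensive so quickly.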
![Subdivided Icosahedron](/img/blog/icosahedron_colored_3.png)
### Truncated Icosahedron
Now we get to the meat of this project. Transforming an icosahedron into a
hexsphere by
[truncating](https://en.wikipedia.org/wiki/Truncation_%28geometry%29) the points
of the icosahedron into hexagon and pentagon faces.
You can imagine this operation as literally cutting off the points of the
subdivided icosahedron at exactly the midpoint between the point and its six or
five neighboring points.
![Image of biggest dodecahedron inside
icosahedron](/img/blog/dodecahedron_in_icosahedron.png)
([image source](http://www.oz.nthu.edu.tw/~u9662122/DualityProperty.html))
In this image you can see the regular icosahedron (0 subdivisions) in wireframe
with a yellow shape underneath which is the result of all 12 points truncated to
12 pentagon faces, in other words: the [regular
dodecahedron](https://en.wikipedia.org/wiki/Dodecahedron).
You can see that the points of the new pentagon faces will be the exact center
of the original triangular faces. It should now make sense why truncating a
shape with 20 faces of 3 edges each results in a shape with 12 faces of 5 edges
each. Each pair multiplied still equals 60.
#### Algorithm
There are many different algorithms you could use to generate the truncated
shape, but this is roughly what I came up with:
1. Store a map of every icosahedron vertex to faces composed from that vertex
(`vert_to_faces`).
2. Calculate and cache the [centroid](https://en.wikipedia.org/wiki/Centroid) of
every triangle in the icosahedron (`triangle_centroids`).
3. For every vertex in the original icosahedron:
4. Find the center point between the centroids of all of the faces for that
vertex (`center_point`). This is essentially the original icosahedron point
   but lowered towards the center of the polyhedron since it will eventually be the
center of a new flat hexagon face.
![hexagon center point in red with original icosahedron faces fanning
out](/img/blog/hexagon_fan.png)
5. For every triangle face composed from the original vertex:
![hexagon fan with selected triangle face in
blue](/img/blog/hexagon_fan_triangle_selected.png)
6. Sort the vertices of the triangle face so there is a vertex `A` in the center
of the fan like in the image, and two other vertices `B` and `C` at the edges
of the hexagon.
7. Find the centroid of the selected face. This will be one of the five or six
points of the new pentagon or hexagon (in brown in diagram below:
`triangleCentroid`).
8. Find the midpoints of `AB` and `AC` (points `midAB` and `midAC` in the
   diagram).
9. With these mid points and the face centroid, we now have two new triangles
(in orange below) that form one-fifth or one-sixth of the final pentagon or
hexagon face. Add the points of the triangle to the `positions` array. Add
the two new triangles composed from those vertices as indexes into the
`positions` array to the `cells` array. We need to compose the pentagon or
hexagon out of triangles because in graphics everything is a triangle, and
this is the simplest way to tile either shape with triangles:
![hexagon fan ](/img/blog/hexagon_fan_construct.png)
10. Go to step 5 until all faces of the icosahedron vertex have been visited.
Save indices to all new triangles in the `cells` array, which now form a
complete pentagon or hexagon face, to the `faces` array.
![hexagons tiling on icosahedron faces](/img/blog/hexagon_tiling.png)
11. Go to step 3 until all vertices in the icosahedron have been visited. The
truncated icosahedron is now complete.
![colored hexsphere of detail level 3](/img/blog/hexsphere_colored_3.png)
#### Code
The `truncate` function calls out to a bunch of other functions, so [here's a
link to the function within the context of the whole
file](https://github.com/thallada/icosahedron/blob/9643757df245e29f5ecfbb25f9a2c06b3a4e1217/src/lib.rs#L227).
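One helper worth showing is the face centroid used in steps 2 and 7, which is
just the average of the triangle's three vertices (sketched here with plain
`[f32; 3]` points rather than the `cgmath` types the real code uses):

```rust
// Centroid of a triangle face: the component-wise average of its
// three vertices.
fn centroid(a: [f32; 3], b: [f32; 3], c: [f32; 3]) -> [f32; 3] {
    [
        (a[0] + b[0] + c[0]) / 3.0,
        (a[1] + b[1] + c[1]) / 3.0,
        (a[2] + b[2] + c[2]) / 3.0,
    ]
}
```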
### Calculating Normals
It took me a surprisingly long time to figure out how to compute
[normals](https://en.wikipedia.org/wiki/Normal_(geometry)) for the truncated
icosahedron. I tried just using an out-of-the-box solution like
[angle-normals](https://github.com/mikolalysenko/angle-normals/blob/master/angle-normals.js)
which could supposedly calculate the normal vectors for you, but they came out
all wrong.
![hexsphere with bad normals](/img/blog/bad_hexsphere_normals.png)
So, I tried doing it myself. Most tutorials on computing normal vectors for a
mesh assume that it is tiled in a particular way. But, my algorithm spins around
icosahedron points in all different directions, and so the triangle points are
not uniformly in clockwise or counter-clockwise order.
I could have sorted these points into the correct order, but I found it easier
to instead detect when the normal was pointing the wrong way and invert it.
```rust
pub fn compute_triangle_normals(&mut self) {
let origin = Vector3::new(0.0, 0.0, 0.0);
for i in 0..self.cells.len() {
let vertex_a = &self.positions[self.cells[i].a].0;
let vertex_b = &self.positions[self.cells[i].b].0;
let vertex_c = &self.positions[self.cells[i].c].0;
let e1 = vertex_a - vertex_b;
let e2 = vertex_c - vertex_b;
let mut no = e1.cross(e2);
// detect and correct inverted normal
let dist = vertex_b - origin;
if no.dot(dist) < 0.0 {
no *= -1.0;
}
let normal_a = self.normals[self.cells[i].a].0 + no;
let normal_b = self.normals[self.cells[i].b].0 + no;
let normal_c = self.normals[self.cells[i].c].0 + no;
self.normals[self.cells[i].a] = ArraySerializedVector(normal_a);
self.normals[self.cells[i].b] = ArraySerializedVector(normal_b);
self.normals[self.cells[i].c] = ArraySerializedVector(normal_c);
}
for normal in self.normals.iter_mut() {
*normal = ArraySerializedVector(normal.0.normalize());
}
}
```
### Assigning Random Face Colors
Finally, all that's left to generate is the face colors. The only way I could
figure out how to individually color a shape's faces in WebGL was to pass a
color per vertex. The issue with this is that each vertex of the generated
shapes could be shared between many different faces.
How can we solve this? At the cost of memory, we can just duplicate a vertex
every time it's used by a different triangle. That way no vertex is shared.
This can be done after a shape has been generated with shared vertices.
```rust
pub fn unique_vertices(&mut self, other: Polyhedron) {
for triangle in other.cells {
let vertex_a = other.positions[triangle.a].0;
let vertex_b = other.positions[triangle.b].0;
let vertex_c = other.positions[triangle.c].0;
let normal_a = other.normals[triangle.a].0;
let normal_b = other.normals[triangle.b].0;
let normal_c = other.normals[triangle.c].0;
self.positions.push(ArraySerializedVector(vertex_a));
self.positions.push(ArraySerializedVector(vertex_b));
self.positions.push(ArraySerializedVector(vertex_c));
self.normals.push(ArraySerializedVector(normal_a));
self.normals.push(ArraySerializedVector(normal_b));
self.normals.push(ArraySerializedVector(normal_c));
self.colors
.push(ArraySerializedVector(Vector3::new(1.0, 1.0, 1.0)));
self.colors
.push(ArraySerializedVector(Vector3::new(1.0, 1.0, 1.0)));
self.colors
.push(ArraySerializedVector(Vector3::new(1.0, 1.0, 1.0)));
let added_index = self.positions.len() - 1;
self.cells
.push(Triangle::new(added_index - 2, added_index - 1, added_index));
}
self.faces = other.faces;
}
```
With unique vertices, we can now generate a random color per face with the [rand
crate](https://crates.io/crates/rand).
```rust
pub fn assign_random_face_colors(&mut self) {
let mut rng = rand::thread_rng();
for i in 0..self.faces.len() {
let face_color = Vector3::new(rng.gen(), rng.gen(), rng.gen());
for c in 0..self.faces[i].len() {
let face_cell = &self.cells[self.faces[i][c]];
self.colors[face_cell.a] = ArraySerializedVector(face_color);
self.colors[face_cell.b] = ArraySerializedVector(face_color);
self.colors[face_cell.c] = ArraySerializedVector(face_color);
}
}
}
```
### Binary Serialization
Now that we have to duplicate vertices for individual face colors, the size of
our JSON output is getting quite big:
| File | Size |
|---|---|
| icosahedron_r1_d6.json | 28 MB |
| icosahedron_r1_d7.json | 113 MB |
| hexsphere_r1_d5.json | 42 MB |
| hexsphere_r1_d6.json | 169 MB |
Since all of our data is just floating point numbers, we could reduce the size
of the output considerably by using a binary format instead.
I used the [byteorder](https://docs.rs/byteorder/1.3.2/byteorder/) crate to
write out all of the `Vec`s in my `Polyhedron` struct to a binary file in
little-endian order.
The binary format is laid out as:
1. One 32-bit unsigned integer specifying the number of vertices (`V`)
2. One 32-bit unsigned integer specifying the number of triangles (`T`)
3. `V` * 3 32-bit floats for every vertex's x, y, and z coordinates
4. `V` * 3 32-bit floats for the normals of every vertex
5. `V` * 3 32-bit floats for the color of every vertex
6. `T` * 3 32-bit unsigned integers for the 3 indices into the vertex
   array that make up every triangle
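A minimal sketch of that layout, using std's `to_le_bytes` instead of the
byteorder crate so it stays dependency-free (normals and colors would be
written between positions and cells in the same way; they're elided here for
brevity):

```rust
use std::io::Write;

// Write the header, vertex positions, and triangle indices of the
// binary layout described above, in little-endian order.
fn write_binary<W: Write>(
    w: &mut W,
    positions: &[[f32; 3]],
    cells: &[[u32; 3]],
) -> std::io::Result<()> {
    w.write_all(&(positions.len() as u32).to_le_bytes())?; // V
    w.write_all(&(cells.len() as u32).to_le_bytes())?; // T
    for p in positions {
        for coord in p {
            w.write_all(&coord.to_le_bytes())?;
        }
    }
    // (normals and colors go here in the full format)
    for t in cells {
        for idx in t {
            w.write_all(&idx.to_le_bytes())?;
        }
    }
    Ok(())
}
```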
The `write_to_binary_file` function which does all that is
[here](https://github.com/thallada/icosahedron/blob/9643757df245e29f5ecfbb25f9a2c06b3a4e1217/src/bin.rs#L13).
That's a lot better:
| File | Size |
|---|---|
| icosahedron_r1_d6.bin | 9.8 MB |
| icosahedron_r1_d7.bin | 11 MB |
| hexsphere_r1_d5.bin | 14 MB |
| hexsphere_r1_d6.bin | 58 MB |
On the JavaScript side, the binary files can be read into `Float32Array`s like
this:
```javascript
fetch(binaryFile)
.then(response => response.arrayBuffer())
.then(buffer => {
let reader = new DataView(buffer);
let numVertices = reader.getUint32(0, true);
let numCells = reader.getUint32(4, true);
let shape = {
positions: new Float32Array(buffer, 8, numVertices * 3),
normals: new Float32Array(buffer, numVertices * 12 + 8, numVertices * 3),
colors: new Float32Array(buffer, numVertices * 24 + 8, numVertices * 3),
cells: new Uint32Array(buffer, numVertices * 36 + 8, numCells * 3),
    };
  });
```
### Rendering in WebGL with Regl
I was initially rendering the shapes with Three.js but switched to
[regl](https://github.com/regl-project/regl) because it seemed like a more
direct abstraction over WebGL. It makes setting up a WebGL renderer incredibly
easy compared to the dozens of cryptic function calls you'd otherwise have to
use.
This is pretty much all of the rendering code using regl in my [3D hexsphere and
icosahedron viewer project](https://github.com/thallada/planet).
```javascript
const drawShape = hexsphere => regl({
vert: `
precision mediump float;
uniform mat4 projection, view;
attribute vec3 position, normal, color;
varying vec3 fragNormal, fragPosition, fragColor;
void main() {
fragNormal = normal;
fragPosition = position;
fragColor = color;
gl_Position = projection * view * vec4(position, 1.0);
}`,
frag: `
precision mediump float;
struct Light {
vec3 color;
vec3 position;
};
uniform Light lights[1];
varying vec3 fragNormal, fragPosition, fragColor;
void main() {
vec3 normal = normalize(fragNormal);
vec3 light = vec3(0.1, 0.1, 0.1);
for (int i = 0; i < 1; i++) {
vec3 lightDir = normalize(lights[i].position - fragPosition);
float diffuse = max(0.0, dot(lightDir, normal));
light += diffuse * lights[i].color;
}
gl_FragColor = vec4(fragColor * light, 1.0);
}`,
attributes: {
position: hexsphere.positions,
normal: hexsphere.normals,
color: hexsphere.colors,
},
elements: hexsphere.cells,
uniforms: {
"lights[0].color": [1, 1, 1],
"lights[0].position": ({ tick }) => {
const t = 0.008 * tick
return [
1000 * Math.cos(t),
1000 * Math.sin(t),
1000 * Math.sin(t)
]
},
},
})
```
I also imported [regl-camera](https://github.com/regl-project/regl-camera) which
handled all of the complex viewport code for me.
It was fairly easy to get a simple renderer working quickly in regl, but I
couldn't find many examples of more complex projects using regl. Unfortunately,
the project looks a bit unmaintained these days as well. If I'm going to
continue with rendering in WebGL, I think I will try out
[Babylon.js](https://www.babylonjs.com/) instead.
### Running in WebAssembly
Since Rust can be compiled down to wasm and then run in the browser, I briefly
tried getting the project to run completely in the browser.
The [wasm-pack](https://github.com/rustwasm/wasm-pack) tool made it pretty easy
to get started. My main struggle was figuring out an efficient way to get the
megabytes of generated shape data into the JavaScript context so it could be
rendered in WebGL.
The best I could come up with was to export all of my structs into flat
`Vec<f32>`s and then create `Float32Array`s from the JS side that are views into
wasm's memory.
To export:
```rust
pub fn fill_exports(&mut self) {
for position in &self.positions {
self.export_positions.push(position.0.x);
self.export_positions.push(position.0.y);
self.export_positions.push(position.0.z);
}
for normal in &self.normals {
self.export_normals.push(normal.0.x);
self.export_normals.push(normal.0.y);
self.export_normals.push(normal.0.z);
}
for color in &self.colors {
self.export_colors.push(color.0.x);
self.export_colors.push(color.0.y);
self.export_colors.push(color.0.z);
}
for cell in &self.cells {
self.export_cells.push(cell.a as u32);
self.export_cells.push(cell.b as u32);
self.export_cells.push(cell.c as u32);
}
}
```
And then the wasm `lib.rs`:
```rust
use byteorder::{LittleEndian, WriteBytesExt};
use js_sys::{Array, Float32Array, Uint32Array};
use wasm_bindgen::prelude::*;
use web_sys::console;
mod icosahedron;
#[cfg(feature = "wee_alloc")]
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
#[wasm_bindgen(start)]
pub fn main_js() -> Result<(), JsValue> {
#[cfg(debug_assertions)]
console_error_panic_hook::set_once();
Ok(())
}
#[wasm_bindgen]
pub struct Hexsphere {
positions: Float32Array,
normals: Float32Array,
colors: Float32Array,
cells: Uint32Array,
}
#[wasm_bindgen]
pub fn shape_data() -> Result<Array, JsValue> {
let radius = 1.0;
let detail = 7;
let mut hexsphere = icosahedron::Polyhedron::new_truncated_isocahedron(radius, detail);
hexsphere.compute_triangle_normals();
let mut unique_hexsphere = icosahedron::Polyhedron::new();
unique_hexsphere.unique_vertices(hexsphere);
unique_hexsphere.assign_random_face_colors();
unique_hexsphere.fill_exports();
let positions = unsafe { Float32Array::view(&unique_hexsphere.export_positions) };
let normals = unsafe { Float32Array::view(&unique_hexsphere.export_normals) };
let colors = unsafe { Float32Array::view(&unique_hexsphere.export_colors) };
let cells = unsafe { Uint32Array::view(&unique_hexsphere.export_cells) };
Ok(Array::of4(&positions, &normals, &colors, &cells))
}
```
With wasm-pack, I could import the wasm package, run the `shape_data()`
function, and then read the contents as any other normal JS array.
```javascript
let rust = import("../pkg/index.js")
rust.then(module => {
const shapeData = module.shape_data()
const shape = {
positions: shapeData[0],
normals: shapeData[1],
colors: shapeData[2],
cells: shapeData[3],
}
...
})
```
I could side-step the issue of transferring data from Rust to JavaScript
entirely by writing literally everything in WebAssembly. But the bindings from
Rust wasm to the WebGL API are still far too cumbersome compared to just
using regl. Plus, I'd have to implement my own camera from scratch.
### The Stats
So how much faster is Rust than JavaScript in generating icosahedrons and
hexspheres?
Here's how long it took to generate shapes in JS with Three.js in Firefox
versus in native Rust, both on an i5-2500K 3.3 GHz CPU.
| Shape | JS generate time | Rust generate time |
|---|---|---|
| Icosahedron detail 6 | 768 ms | 28.23 ms |
| Icosahedron detail 7 | 4.25 s | 128.81 ms |
| Hexsphere detail 6 | 11.37 s | 403.10 ms |
| Hexsphere detail 7 | 25.49 s | 1.85 s |
So much faster!
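For reference, here are the speedups implied by that table (a quick back-of-the-envelope calculation, not part of the original benchmark):

```rust
// Speedups implied by the table above (all times converted to milliseconds).
fn main() {
    let rows = [
        ("Icosahedron detail 6", 768.0, 28.23),
        ("Icosahedron detail 7", 4250.0, 128.81),
        ("Hexsphere detail 6", 11370.0, 403.10),
        ("Hexsphere detail 7", 25490.0, 1850.0),
    ];
    for (name, js_ms, rust_ms) in rows {
        println!("{name}: {:.1}x faster than JS", js_ms / rust_ms);
    }
}
```

The ratios range from roughly 14x to 33x depending on the shape and detail level.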
### Todo
* Add a process that alters the shape post-generation. Part of the reason why I
decided to fan the hexagon faces with so many triangles is that it also allows
me to control the height of the faces better. This could eventually allow me
to create mountain ranges and river valleys on a hexsphere planet. Stretching
and pulling the edges of the polygon faces in random directions could add
variation and make for a more organic looking hexsphere.
* Conversely, it would be nice to be able to run a process post-generation that
could reduce the number of triangles by tiling the hexagons more efficiently
when face elevation isn't needed.
* Add parameters to the generation that allow generating sections of the
hexsphere / icosahedron. This will be essential for rendering very detailed
polyhedrons since at a certain detail level it becomes impossible to render
the entire shape at once.
In WebGL, figure out what part of the shape is in the current viewport and
pass these parameters to the generation.
* Render the shapes in a native Rust graphics library instead of WebGL. I'm
curious how much slower WebGL is making things.
* Parallelize the generation. Right now the generation is very CPU bound and
each subdivide/truncate iteration is mostly independent from each other, so I
think I could get some decent speed-up by allowing the process to run on
multiple cores. Perhaps the [rayon](https://github.com/rayon-rs/rayon) crate
could make this pretty straightforward.
* Find some way to avoid unique vertices. The size of the shape is *much* bigger
because of this. There might be a way to keep shared vertices while also
having a separate color per face by using texture mapping.
* In the renderer, implement face selection (point and click face and show an
outline around selected face).
* In the renderer, implement fly-to-face zooming: given a face, fly the camera
around the sphere in an orbit and then zoom in on the face.
---
title: "Modmapper: Putting every Skyrim mod on a map with Rust"
layout: post
image: /img/blog/modmapper.jpg
---
[Modmapper](https://modmapper.com) is a website that I made that puts every mod
for the game [Elder Scrolls V:
Skyrim](https://en.wikipedia.org/wiki/The_Elder_Scrolls_V:_Skyrim) uploaded to
[Nexus Mods](https://www.nexusmods.com/) on an interactive map.
<a href="https://modmapper.com" target="_blank">
![Screenshot of modmapper.com](/img/blog/modmapper.jpg)
</a>
You can view the map at [https://modmapper.com](https://modmapper.com).
Released in 2011, Skyrim is over a decade old now. But, its vast modding
community has kept it alive and relevant to this day. [Skyrim is still in the
top 50 games being played on Steam in 2022](https://steamcharts.com/top/p.2) and
I think it's no coincidence that [it's also one of the most modded games
ever](https://www.nexusmods.com/games?).
<!--excerpt-->
The enormous and enduring modding community around the Elder Scrolls games is
why I have a special fondness for the series. I was 13 when I first got
interested in programming through [making mods for Elder Scrolls IV:
Oblivion](https://www.nexusmods.com/users/512579?tab=user+files&BH=2). I quickly
realized I got way more satisfaction out of modding the game than actually
playing it. I was addicted to being able to create whatever my mind imagined in
my favorite game.
I was working on a mod for Skyrim earlier in the year[^bazaarrealm] and was
looking for the best places to put new buildings in the game world. I really
wanted areas of the game world off the beaten (heavily-modded) path. After over
a decade of modifications, there could be conflicts with hundreds of mods in any
area I chose which could cause issues like multiple buildings overlapping or
terrain changes causing floating rocks and trees.
<p>
<div class="row">
<figure>
<img alt="Example of a conflict between two mods that both chose the
same spot to put a lamp post and sign post so they are clipping"
src="/img/blog/modmapper-clipping-example2.jpg" />
        <figcaption>
            <em>
                Example of a conflict between two mods that both chose the same
                spot to put a lamp post and sign post so they are clipping.
                Screenshot by <a
                href="https://www.nexusmods.com/users/63732336">
                AndreySG</a>.
            </em>
        </figcaption>
</figure>
<figure>
<img alt="Example of a conflict between two mods that both chose the
same spot to put a building and rock so they are clipping"
src="/img/blog/modmapper-clipping-example1.jpg" />
        <figcaption>
            <em>
                Conflict between a building and rock. Screenshot by <a
                href="https://www.reddit.com/user/LewdManoSaurus">
                LewdManoSaurus</a>.
            </em>
        </figcaption>
</figure>
<figure>
<img alt="Example of a conflict between two mods that both chose the
same spot to put a building and tree so they are clipping"
src="/img/blog/modmapper-clipping-example3.jpg" />
        <figcaption>
            <em>
                Conflict between a building and a tree. Screenshot by <a
                href="https://www.nexusmods.com/skyrimspecialedition/users/51448566">
                Janquel</a>.
            </em>
        </figcaption>
</figure>
<figure>
<img alt="Example of a conflict between two mods that both chose the
same spot to put a woodcutting mill"
src="/img/blog/modmapper-clipping-example4.jpg" />
        <figcaption>
            <em>
                Conflict between two woodcutting mills. Screenshot by <a
                href="https://www.nexusmods.com/skyrimspecialedition/users/51448566">
                Janquel</a>.
            </em>
        </figcaption>
</figure>
</div>
</p>
Mod authors usually use a tool like
[TES5Edit](https://www.nexusmods.com/skyrim/mods/25859) to analyze a group of
mod plugins to find conflicts and create patches to resolve them on a
case-by-case basis. But, I was unsatisfied with that. I wanted to be assured
that there would be no conflicts, or at least know the set of all possible mods
out there that could conflict so I could manually patch those few mods. There
was no good solution for finding conflicts across all mods, though. Mod authors
would need to download every Skyrim mod ever released: no one has time to
download all 85,000+ Skyrim mods, and no one has the computer memory to load
them all into TES5Edit at the same time.
Through that frustration, Modmapper was born with the mission to create a
database of all Skyrim mod exterior cell edits. With that database I can power
the website which visualizes how popular cells are in aggregate as well as allow
the user to drill down to individual cells, mods, or plugins to find potential
conflicts without ever having to download files themselves.
When I [released the website about 7 months
ago](https://www.reddit.com/r/skyrimmods/comments/sr8k4d/modmapper_over_14_million_cell_edits_from_every/)
it made a big splash in the Skyrim modding community. No one had ever visualized
mods on a map like this before, and it gave everyone a new perspective on the
vast library of Skyrim mods. It was even [featured on the front page of PC
Gamer's
website](https://www.pcgamer.com/skyrim-modmapper-is-a-weirdly-beautiful-way-to-manage-your-mods/).
Thirteen-year-old me, who regularly read the monthly PC Gamer magazine, would
have been astounded.
<a
href="https://www.pcgamer.com/skyrim-modmapper-is-a-weirdly-beautiful-way-to-manage-your-mods/"
target="_blank">
![Screenshot of PC Gamer article titled "Skyrim Modmapper is a weirdly
beautiful way to manage your mods" by Robert Zak published April 20,
2022](/img/blog/modmapper-pcgamer.jpg)
</a>
The comments posted on the initial mod I published on Nexus Mods[^takedown] for
the project were very amusing. The site seemed to be blowing people's minds:
> "Quite possibly this could be the best mod for
Skyrim. This hands-down makes everyone's life easier to be able to see which of
their mods might be conflicting." -- [Nexus Mods comment by
lorddonk](/img/blog/modmapper-comment15.png)
> "The 8th wonder of Skyrim. That's a Titan's work requiring a monk's
> perserverance. Finally, a place to go check (in)compatibilities !!! Voted.
> Endorsed." -- [Nexus Mods comment by
> jfjb2005](/img/blog/modmapper-comment3.png)
> "They shall sing songs of your greatness! Wow, just wow." -- [Nexus Mods
> comment by
LumenMystic](/img/blog/modmapper-comment7.png)
> "Holy Batman Tits! Be honest..... You're a Govt Agent and made this mod during
> your "Terrorist Watch Shift" using a CIA super computer.." -- [Nexus Mods
comment by toddrizzle](/img/blog/modmapper-comment1.png)
> "What drugs are you on and can I have some?" -- [Nexus Mods comment by
> thappysnek](/img/blog/modmapper-comment11.png)
> "This is madness! Author are some kind of overhuman?! GREAT work!"-- [Nexus
> Mods comment by TeodorWild](/img/blog/modmapper-comment10.png)
> "You are an absolute legend. Bards will sing tales of your exploits" -- [Nexus
> Mods comment by burntwater](/img/blog/modmapper-comment2.png)
> "I wanted to say something, but I'll just kneel before thee and worship. This
> would have taken me a lifetime. Amazing." -- [Nexus Mods comment by
> BlueGunk](/img/blog/modmapper-comment8.png)
> "Finally found the real dragonborn" -- [Nexus Mods comment by
> yag1z](/img/blog/modmapper-comment6.png)
> "he is the messiah!" -- [Nexus Mods comment by
> Cursedobjects](/img/blog/modmapper-comment12.png)
> "A god amongst men." -- [Nexus Mods comment by
> TheMotherRobbit](/img/blog/modmapper-comment13.png)
Apparently knowing how to program is now a god-like ability! This is the type of
feedback most programmers aspire to get from their users. I knew the tool was
neat and fun to build, but I didn't realize it was *that* sorely needed by the
community.
Today, Modmapper has a sustained user-base of around 7.5k unique visitors a
month[^analytics] and I still see it mentioned in reddit threads or discord
servers whenever someone is asking about the places a mod edits or what mods
might be conflicting in a particular cell.
The rest of this blog post will delve into how I built the website and how I
gathered all of the data necessary to display the visualization.
### Downloading ALL THE MODS!
![Meme with the title "DOWNLOAD ALL THE MODS!"](/img/blog/allthemods.jpg)
In order for the project to work I needed to collect all the Skyrim mod plugin
files.
While there are a number of places people upload Skyrim mods, [Nexus
Mods](https://nexusmods.com) is conveniently the most popular and has the vast
majority of mods. So, I would only need to deal with this one source. Luckily,
[they have a nice API
handy](https://app.swaggerhub.com/apis-docs/NexusMods/nexus-mods_public_api_params_in_form_data/1.0).
[modmapper](https://github.com/thallada/modmapper) is the project I created to
do this. It is a Rust binary that:
* Uses [reqwest](https://crates.io/crates/reqwest) to make requests to [Nexus
Mods](https://nexusmods.com) for pages of last updated mods.
* Uses [scraper](https://crates.io/crates/scraper) to scrape the HTML for
individual mod metadata (since the Nexus API doesn't provide an endpoint to
list mods).
* Makes requests to the Nexus Mods API to get file and download information for
each mod, using [serde](https://serde.rs/) to parse the
[JSON](https://en.wikipedia.org/wiki/JSON) responses.
* Requests the content preview data for each file and walks through the list of
files in the archive looking for a Skyrim plugin file (`.esp`, `.esm`, or
`.esl`).
* If it finds a plugin, it decides to download the mod. It hits the download API
to get a download link and downloads the mod file archive.
* Then it extracts the archive using one of:
[compress_tools](https://crates.io/crates/compress-tools),
[unrar](https://crates.io/crates/unrar), or [7zip](https://www.7-zip.org/) via
[`std::process::Command`](https://doc.rust-lang.org/std/process/struct.Command.html)
(depending on what type of archive it is).
* With the ESP files (Elder Scrolls Plugin files) extracted, I then use my
[skyrim-cell-dump](https://github.com/thallada/skyrim-cell-dump) library (more
on that later!) to extract all of the cell edits into structured data.
* Uses [seahash](https://crates.io/crates/seahash) to create a fast unique hash
for plugin files.
* It then saves all of this data to a [postgres](https://www.postgresql.org/)
database using the [sqlx crate](https://crates.io/crates/sqlx).
* Uses extensive logging with the [tracing
crate](https://crates.io/crates/tracing) so I can monitor the output and have
a history of a run to debug later if I discover an issue.
It is designed to be run as a nightly [cron](https://en.wikipedia.org/wiki/Cron)
job which downloads mods that have updated on Nexus Mods since the last run.
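The plugin-detection step in that pipeline boils down to an extension check on each archived file path. A minimal sketch (hypothetical helper name, not the actual modmapper code):

```rust
// Hypothetical helper: does a path inside a mod archive look like a Skyrim
// plugin? Matching is case-insensitive since mod archives mix conventions.
fn is_plugin_file(path: &str) -> bool {
    let lower = path.to_ascii_lowercase();
    [".esp", ".esm", ".esl"].iter().any(|ext| lower.ends_with(ext))
}

fn main() {
    assert!(is_plugin_file("Data/MyMod.esp"));
    assert!(is_plugin_file("MYMOD.ESM"));
    assert!(!is_plugin_file("textures/rock.dds"));
    println!("ok");
}
```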
To keep costs for this project low, I decided to make the website entirely
static. So, instead of creating an API server that would have to be constantly
running to serve requests from the website by making queries directly to the
database, I would dump all of the data that the website needed from the database
to JSON files, then upload those files to [Amazon
S3](https://aws.amazon.com/s3/) and serve them through the [Cloudflare
CDN](https://www.cloudflare.com/cdn/) which has servers all over the world.
So, for example, every mod in the database has a JSON file uploaded to
`https://mods.modmapper.com/skyrimspecialedition/<nexus_mod_id>.json` and the
website frontend will fetch that file when a user clicks a link to that mod in
the UI.
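The dump step can be sketched with just the standard library. The field names and output directory here are illustrative, not the actual Modmapper schema (the real code serializes database rows with serde):

```rust
use std::fs;
use std::path::Path;

// Sketch of the dump step: one JSON file per mod, named by its Nexus id.
// The `{:?}` formatting quote-escapes simple ASCII strings well enough for
// a sketch; real code would use a proper JSON serializer like serde_json.
fn dump_mod(dir: &Path, nexus_mod_id: u32, name: &str) -> std::io::Result<()> {
    let json = format!("{{\"id\":{},\"name\":{:?}}}", nexus_mod_id, name);
    fs::write(dir.join(format!("{nexus_mod_id}.json")), json)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("modmapper-demo");
    fs::create_dir_all(&dir)?;
    dump_mod(&dir, 2347, "Example Mod")?;
    let contents = fs::read_to_string(dir.join("2347.json"))?;
    println!("{contents}");
    Ok(())
}
```

Uploading the resulting directory to S3 is then a plain file sync; no server code ever runs per-request.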
The cost for S3 is pretty reasonable to me (~$3.50/month), and Cloudflare has a
[generous free tier](https://www.cloudflare.com/plans/#price-matrix) that allows
me to host everything through it for free.
The server that I actually run `modmapper` on to download all of the mods is a
server I already have at home that I also use for other purposes. The output of
each run is uploaded to S3, and I also make a full backup of the database and
plugin files to [Dropbox](https://www.dropbox.com).
A lot of people thought it was insane that I downloaded every mod[^adult-mods],
but in reality it wasn't too bad once I got all the issues resolved in
`modmapper`. I just let it run in the background all day and it would chug
through the list of mods one-by-one. Most of the time ended up being spent
waiting for the Nexus Mods API's hourly rate limit on my account to
reset.[^rate-limit]
As a result of this project I believe I now have the most complete set of all
Skyrim plugins to date (extracted plugins only without other textures, models,
etc.)[^plugin-collection]. Compressed, it totals around 99 GB, uncompressed: 191
GB.
[After I downloaded Skyrim Classic mods in addition to Skyrim Special
Edition](#finishing-the-collection-by-adding-all-skyrim-classic-mods), here are
some counts from the database:
| Statistic | Count |
|:---|:---|
| **Mods** | 113,028 |
| **Files** | 330,487 |
| **Plugins** | 534,831 |
| **Plugin Cell Edits** | 33,464,556 |
### Parsing Skyrim plugin files
The Skyrim game engine has a concept of
[worldspaces](https://en.uesp.net/wiki/Skyrim:Worldspaces) which are exterior
areas where the player can travel to. The biggest of these being, of course,
Skyrim itself (which, in the lore, is a province of the continent of
[Tamriel](https://en.uesp.net/wiki/Lore:Tamriel) on the planet
[Nirn](https://en.uesp.net/wiki/Lore:Nirn)). Worldspaces are recorded in a
plugin file as [WRLD
records](https://en.uesp.net/wiki/Skyrim_Mod:Mod_File_Format/WRLD).
Worldspaces are then chunked up into a square grid of cells. The Skyrim
worldspace consists of a little over 11,000 square cells. Mods that make
changes to the game world have a record in the plugin (a [CELL
record](https://en.uesp.net/wiki/Skyrim_Mod:Mod_File_Format/CELL)) with the
cell's X and Y coordinates and a list of the changes in that cell.
There is some prior art ([esplugin](https://github.com/Ortham/esplugin),
[TES5Edit](https://github.com/TES5Edit/TES5Edit),
[zedit](https://github.com/z-edit/zedit)) of open-source programs that could
parse Skyrim plugins and extract this data. However, all of these were too broad
for my purpose or relied on the assumption of being run in the context of a load
order where the master files of a plugin would also be available. I wanted a
program that could take a single plugin in isolation and skip through all of the
non-relevant parts of it and dump just the CELL and WRLD record data plus some
metadata about the plugin from the header as fast as possible.
After discovering [the wonderful documentation on the UESP wiki about the Skyrim
mod file format](https://en.uesp.net/wiki/Skyrim_Mod:Mod_File_Format), I
realized this would be something that would be possible to make myself.
[skyrim-cell-dump](https://github.com/thallada/skyrim-cell-dump) is a Rust
library/CLI program that accepts a Skyrim mod file and spits out the header
metadata of the plugin, the worlds edited/created, and all of the cells it
edits/creates.
Under the hood, it uses the [nom crate](https://crates.io/crates/nom) to read
through the plugin until it finds the relevant records, then uses
[flate2](https://crates.io/crates/flate2) to decompress any compressed record
data, and finally outputs the extracted data formatted to JSON with
[serde](https://crates.io/crates/serde).
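To illustrate the skip-ahead idea, here's a toy parser over a simplified record layout: a 4-byte type tag followed by a little-endian u32 payload length. The real TES5 format has larger record headers with flags and form IDs, plus nested GRUPs, so this is only a sketch of the strategy, not the format:

```rust
// Collect only the records whose type tags we care about, skipping over the
// payloads of everything else without parsing them.
fn collect_records(mut data: &[u8], wanted: &[&[u8; 4]]) -> Vec<(String, Vec<u8>)> {
    let mut out = Vec::new();
    while data.len() >= 8 {
        let ty: [u8; 4] = data[0..4].try_into().unwrap();
        let len = u32::from_le_bytes(data[4..8].try_into().unwrap()) as usize;
        let end = 8 + len;
        if data.len() < end {
            break; // truncated record
        }
        if wanted.iter().any(|w| **w == ty) {
            out.push((String::from_utf8_lossy(&ty).into_owned(), data[8..end].to_vec()));
        }
        data = &data[end..]; // jump straight past irrelevant records
    }
    out
}

fn main() {
    // Build a toy buffer with one irrelevant and two relevant records.
    let mut buf = Vec::new();
    for (ty, payload) in [(b"WEAP", &b"junk"[..]), (b"CELL", &b"xy"[..]), (b"WRLD", &b"w"[..])] {
        buf.extend_from_slice(ty);
        buf.extend_from_slice(&(payload.len() as u32).to_le_bytes());
        buf.extend_from_slice(payload);
    }
    let records = collect_records(&buf, &[b"CELL", b"WRLD"]);
    for (ty, payload) in &records {
        println!("{ty}: {} bytes", payload.len());
    }
    assert_eq!(records.len(), 2);
}
```

Because irrelevant payloads are never touched, this style of parser spends almost all of its time on the few records it actually wants.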
Overall, I was pretty happy with this set of tools and was able to quickly
get the data I needed from plugins. My only gripe was that I never quite figured
out how to properly do error handling with nom. If there was ever an error, I
didn't get much data in the error about what failed besides what function it
failed in. I often had to resort to peppering in a dozen `dbg!()` statements to
figure out what went wrong.
I built it as both a library and binary crate so that I could import it in other
libraries and get the extracted data directly as Rust structs without needing to
go through JSON. I'll go more into why this was useful later.
### Building the website
Since I wanted to keep server costs low and wanted the site to be as fast as
possible for users, I decided pretty early on that the site would be purely
static HTML and JavaScript with no backend server. I decided to use the [Next.js
web framework](https://nextjs.org/) with
[TypeScript](https://www.typescriptlang.org/) since it was what I was familiar
with using in my day job. While it does have [server-side rendering
support](https://nextjs.org/docs/basic-features/pages#server-side-rendering)
which would require running a backend [Node.js](https://nodejs.org/en/) server,
it also supports a limited feature-set that can be [exported as static
HTML](https://nextjs.org/docs/advanced-features/static-html-export).
I host the site on [Cloudflare pages](https://pages.cloudflare.com/) which is
available on their free tier and made deploying from Github commits a
breeze[^cloudflare]. The web code is in my [modmapper-web
repo](https://github.com/thallada/modmapper-web).
The most prominent feature of the website is the interactive satellite map of
Skyrim. Two essential resources made this map possible: [the map tile images
from the UESP skyrim map](https://srmap.uesp.net/) and
[Mapbox](https://www.mapbox.com/).
[Mapbox provides a JS library for its WebGL
map](https://docs.mapbox.com/mapbox-gl-js/api/) which allows specifying a
[raster tile
source](https://docs.mapbox.com/mapbox-gl-js/example/map-tiles/)[^3d-terrain].
The [UESP team painstakingly loaded every cell in the Skyrim worldspace in the
Creation Kit and took a
screenshot](https://en.uesp.net/wiki/UESPWiki:Skyrim_Map_Design). Once I figured
out which image tiles mapped to which in-game cell it was relatively easy to put
a map together by plugging them into the Mapbox map as a raster tile source.
The heatmap overlaid on the map is created using a [Mapbox
layer](https://docs.mapbox.com/help/glossary/layer/) that fills a cell with a
color on a gradient from green to red depending on how many edits that cell has
across the whole database of mods.
![Screenshot closeup of modmapper.com displaying a grid of colored cells from
green to red overlaid atop a satellite map of
Skyrim](/img/blog/modmapper-heatmap-closeup.jpg)
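The aggregation behind that gradient can be sketched as a simple edit count per cell; the coordinates and counts below are made up for illustration:

```rust
use std::collections::HashMap;

// Count how many edits touch each (x, y) cell, then normalize against the
// busiest cell so each cell gets an intensity in [0, 1] for the gradient.
fn main() {
    let edits: &[(i32, i32)] = &[(0, 0), (5, -3), (0, 0), (0, 0), (5, -3), (12, 7)];
    let mut counts: HashMap<(i32, i32), u32> = HashMap::new();
    for &cell in edits {
        *counts.entry(cell).or_insert(0) += 1;
    }
    let max = counts.values().copied().max().unwrap_or(1);
    for (&(x, y), &n) in &counts {
        println!("cell ({x}, {y}): {n} edits, intensity {:.2}", n as f32 / max as f32);
    }
}
```

On the site, those per-cell intensities are fed to a Mapbox fill layer as the green-to-red color stops.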
The sidebar on the site is created using [React](https://reactjs.org/) and
[Redux](https://redux.js.org/) and uses the
[next/router](https://nextjs.org/docs/api-reference/next/router) to keep track
of which page the user is on with URL parameters.
<p>
<div class="row">
<img alt="Screenshot of modmapper.com sidebar with a cell selected"
src="/img/blog/modmapper-cell-sidebar.jpg" class="half-left" />
<img alt="Screenshot of modmapper.com sidebar with a mod selected"
src="/img/blog/modmapper-mod-sidebar.jpg" class="half-right" />
</div>
</p>
The mod search is implemented with
[MiniSearch](https://lucaong.github.io/minisearch/), which asynchronously loads
the giant search indices for each game containing every mod name and id.
![Screenshot of modmapper.com with "trees" entered into the search bar with a
number of LE and SE mod results listed underneath in a
dropdown](/img/blog/modmapper-search.jpg)
One of the newest features of the site allows users to drill down to a
particular plugin within a file of a mod and "Add" it to their list. All of the
added plugins will be listed in the sidebar and the cells they edit displayed in
purple outlines and conflicts between them displayed in red outlines.
![Screenshot of modmapper.com with 4 Added Plugins and the map covered in purple
and red boxes](/img/blog/modmapper-added-plugins.jpg)
### Loading plugins client-side with WebAssembly
A feature that many users requested after the initial release was being able to
load a list of the mods currently installed on their game and see which ones of
that set conflict with each other[^second-announcement]. Implementing this
feature was one of the most interesting parts of the project. Choosing Rust
made it possible, since everything I was running server-side to
extract the plugin data could also be done client-side in the browser with the
same Rust code compiled to [WebAssembly](https://webassembly.org/).
I used [wasm-pack](https://github.com/rustwasm/wasm-pack) to create
[skyrim-cell-dump-wasm](https://github.com/thallada/skyrim-cell-dump-wasm/)
which exported the `parse_plugin` function from my
[skyrim-cell-dump](https://github.com/thallada/skyrim-cell-dump) Rust library
compiled to WebAssembly. It also exports a `hash_plugin` function that creates a
unique hash for a plugin file's slice of bytes using
[seahash](https://crates.io/crates/seahash) so the site can link plugins a user
has downloaded on their hard-drive to plugins that have been downloaded by
modmapper and saved in the database.
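The matching idea can be sketched with std's `DefaultHasher` standing in for seahash. (The real site uses seahash specifically because its output is stable; `DefaultHasher` is only a placeholder here and is not guaranteed stable across Rust releases.)

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash a plugin's raw bytes so a locally loaded file can be matched against
// the hash of the same plugin already stored in the database.
fn hash_plugin(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let local_copy = b"TES4...plugin bytes...";
    let downloaded_copy = b"TES4...plugin bytes...";
    // Identical bytes always produce identical hashes, regardless of filename.
    assert_eq!(hash_plugin(local_copy), hash_plugin(downloaded_copy));
    println!("{:x}", hash_plugin(local_copy));
}
```

Hashing the contents rather than the filename means renamed or repackaged plugins still link back to the right database entry.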
Dragging-and-dropping the Skyrim Data folder on to the webpage or selecting the
folder in the "Open Skyrim Data directory" dialog kicks off a process that
starts parsing all of the plugin files in that directory in parallel using [Web
Workers](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers).
I developed my own
[`WorkerPool`](https://github.com/thallada/modmapper-web/blob/4af628559030c3f24618b29b46d4a40af2f200a6/lib/WorkerPool.ts)
that manages creating a pool of available workers and assigns them to plugins to
process. The pool size is the number of cores on the user's device so that the
site can process as many plugins in parallel as possible. After a worker
finishes processing a plugin and sends the output to the Redux store, it gets
added back to the pool and is then assigned a new plugin to process if there are
any[^wasm-troubles].
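The same pool-of-workers idea, sketched with std threads and channels instead of Web Workers (a simplification of the site's actual `WorkerPool`):

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

// A fixed pool of workers pulls plugin "jobs" off a shared queue; each worker
// grabs the next job as soon as it finishes the previous one.
fn main() {
    let plugins: Vec<String> = (0..8).map(|i| format!("plugin{i}.esp")).collect();
    let (job_tx, job_rx) = mpsc::channel::<String>();
    let (done_tx, done_rx) = mpsc::channel::<String>();
    let job_rx = Arc::new(Mutex::new(job_rx));

    let pool_size = 4; // in the browser this would be navigator.hardwareConcurrency
    for _ in 0..pool_size {
        let job_rx = Arc::clone(&job_rx);
        let done_tx = done_tx.clone();
        thread::spawn(move || loop {
            // Hold the lock while waiting for the next job, release it to work.
            let job = { job_rx.lock().unwrap().recv() };
            match job {
                Ok(plugin) => done_tx.send(format!("parsed {plugin}")).unwrap(),
                Err(_) => break, // channel closed: no more plugins
            }
        });
    }
    drop(done_tx); // workers hold the remaining senders

    for p in &plugins {
        job_tx.send(p.clone()).unwrap();
    }
    drop(job_tx); // closing the channel lets idle workers exit

    let results: Vec<String> = done_rx.iter().collect();
    assert_eq!(results.len(), plugins.len());
    println!("processed {} plugins", results.len());
}
```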
Once all plugins have been loaded, the map updates by displaying all of the
cells edited in a purple box and any cells that are edited by more than one
plugin in a red box.
![Screenshot of modmapper.com with 74 Loaded Plugins and the map filled with
purple and red boxes](/img/blog/modmapper-loaded-plugins.jpg)
Users can also drag-and-drop or paste their `plugins.txt` file, which is the
file that the game uses to define the load order of plugins and which plugins
are enabled or disabled. Adding the `plugins.txt` sorts the list of loaded
plugins in the sidebar in load order and enables or disables plugins as defined
in the `plugins.txt`.
![Screenshot of modmapper.com with the Paste plugins.txt dialog
open](/img/blog/modmapper-pluginstxt-dialog.jpg)
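Parsing a `plugins.txt` is straightforward; here's a sketch assuming the Skyrim SE convention, where `#` starts a comment, a leading `*` marks a plugin enabled, and line order defines load order:

```rust
// Parse plugins.txt into (plugin name, enabled) pairs, preserving load order.
fn parse_plugins_txt(contents: &str) -> Vec<(String, bool)> {
    contents
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .map(|line| match line.strip_prefix('*') {
            Some(name) => (name.to_string(), true),
            None => (line.to_string(), false),
        })
        .collect()
}

fn main() {
    let example = "# This file is used by Skyrim SE\n\
        *Unofficial Skyrim Special Edition Patch.esp\n\
        DisabledMod.esp\n\
        *SomeMod.esp\n";
    for (name, enabled) in parse_plugins_txt(example) {
        println!("{name}: {}", if enabled { "enabled" } else { "disabled" });
    }
}
```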
Selecting a cell in the map will display all of the loaded cells that edit that
cell in the sidebar.
![Screenshot of modmapper.com with a conflicted cell selected on the map and 4
Loaded Plugins displayed](/img/blog/modmapper-conflicted-cell.jpg)
The ability to load plugins straight from a user's hard-drive allows users to
map mods that haven't even been uploaded to Nexus Mods.
### Vortex integration
The initial mod I released on the Skyrim Special Edition page of Nexus Mods was
[taken
down](https://www.reddit.com/r/skyrimmods/comments/svnz4a/modmapper_got_removed/)
by the site admins since it didn't contain an actual mod and they didn't agree
that it qualified as a "Utility".
Determined to have an actual mod page for Modmapper on Nexus Mods, I decided to
make a [Vortex](https://www.nexusmods.com/about/vortex/) integration for
modmapper. Vortex is a mod manager made by the developers of Nexus Mods and they
allow creating extensions to the tool and have their own [mod section for Vortex
extensions](https://www.nexusmods.com/site).
With the help of [Pickysaurus](https://www.nexusmods.com/skyrim/users/31179975),
one of the community managers for Nexus Mods, I created a [Vortex integration
for Modmapper](https://www.nexusmods.com/site/mods/371). It adds a context menu
option on mods to view the mod in Modmapper with all of the cells it edits
selected in purple. It also adds a button next to every plugin file to view just
that plugin in Modmapper (assuming it has been processed by Modmapper).
<p>
<div class="row">
<img alt="Screenshot of Vortex mod list with a mod context menu open which
shows a 'See on Modmapper' option"
src="/img/blog/modmapper-vortex-mod-menu.jpg" class="half-left" />
<img alt="Screenshot of Vortex plugin list with 'See on Modmapper' buttons
on the right of each plugin row"
src="/img/blog/modmapper-vortex-plugin-button.jpg" class="half-right" />
</div>
</p>
To enable the latter part, I had to include `skyrim-cell-dump-wasm` in the
extension so that I could hash the plugin contents with `seahash` to get the
same hash that Modmapper would have generated. It only does this hashing when
you click the "See on Modmapper" button, to avoid excessive CPU usage when
viewing the plugin list.
After releasing the Vortex plugin, Pickysaurus [published a news article about
modmapper to the Skyrim Special Edition
site](https://www.nexusmods.com/skyrimspecialedition/news/14678) which also got
a lot of nice comments ❤️.
### Finishing the collection by adding all Skyrim Classic mods
Skyrim is very silly in that it has [many
editions](https://ag.hyperxgaming.com/article/12043/every-skyrim-edition-released-over-the-last-decade).
But there was only one that split the modding universe into two: [Skyrim Special
Edition (SE)](https://en.uesp.net/wiki/Skyrim:Special_Edition).
It was released in October 2016 with a revamped game engine that brought some
sorely needed graphical upgrades. However, it also contained changes to how mods
worked, requiring all mod authors to convert their mods to SE. This created a
big chasm in the library of mods, and Nexus Mods had to make a separate section for
SE-only mods.
When I started downloading mods in 2021, I began with only Skyrim SE mods,
which, at the time of writing, total over [55,000 mods on Nexus
Mods](https://www.nexusmods.com/skyrimspecialedition/mods/).
After releasing with just SE mods, many users requested that all of the classic
pre-SE Skyrim mods be added as well. This month, I finally finished downloading
all Skyrim Classic mods, which, at the time of writing, total over [68,000
mods on Nexus Mods](https://www.nexusmods.com/skyrim/mods/). That brings the
total of downloaded and processed mods for Modmapper to over 113,000
mods[^adult-mods]!
### The future
A lot of users had great feedback and suggestions on what to add to the site. I
could only implement so many of them, though. The rest I've been keeping track
of on [this Trello board](https://trello.com/b/VdpTQ7ar/modmapper).
Some of the headline items on it are:
* Add [Solstheim map](https://dbmap.uesp.net/)
Since map tile images are available for that worldspace, and because I have
already recorded edits to that worldspace in my database, it shouldn't be too
terribly difficult.
* Add [Mod Organizer 2](https://www.modorganizer.org/) plugin
Lots of people requested this since it's a very popular mod manager compared
to Vortex. MO2 supports python extensions so I created
[skyrim-cell-dump-py](https://github.com/thallada/skyrim-cell-dump-py) to
export the Rust plugin processing code to a Python library. I got a bit stuck
on actually creating the plugin though, so it might be a while until I get to
that.
* Find a way to display interior cell edits on the map
The map is currently missing edits to interior cells. Since almost all
interior cells in Skyrim have a link to the exterior world through a door
teleporter, it should be possible to map an interior cell edit to an exterior
cell on the map based on which cell the door leads out to.
That will require digging much more into the plugin files for more data, and
city worldspaces will complicate things further. Then there's the question of
interiors with multiple doors to different exterior cells, or interior cells
nested recursively deep within many other interior cells.
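A sketch of that door-following idea (the record shapes and function below
are hypothetical stand-ins, not Modmapper's actual schema or code):

```typescript
// Hypothetical record shapes; Modmapper's real data model differs.
interface Cell {
  id: string;
  isInterior: boolean;
  x?: number; // exterior grid coordinates, when applicable
  y?: number;
}

interface Door {
  fromCell: string; // cell containing the door
  toCell: string;   // cell the teleporter leads to
}

// Resolve an interior cell to the exterior cell its doors eventually lead
// out to, following doors through nested interiors up to a depth limit.
function resolveExteriorCell(
  cellId: string,
  cells: Map<string, Cell>,
  doors: Door[],
  maxDepth = 10,
): Cell | undefined {
  const seen = new Set<string>();
  let current = cellId;
  for (let depth = 0; depth < maxDepth; depth++) {
    const cell = cells.get(current);
    if (!cell) return undefined;
    if (!cell.isInterior) return cell; // reached the worldspace
    if (seen.has(current)) return undefined; // cycle guard
    seen.add(current);
    // Follow the first door out of this cell. Interiors with multiple doors
    // to different exterior cells would need a policy here (e.g. attribute
    // the edit to every linked exterior cell).
    const door = doors.find((d) => d.fromCell === current);
    if (!door) return undefined;
    current = door.toCell;
  }
  return undefined;
}
```

The depth limit and cycle guard are there precisely for the nested-interior
case mentioned above.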
* Create a standalone [Electron](https://www.electronjs.org) app that can run
outside the browser
I think this would solve a lot of the issues I ran into while developing the
website. Since Electron runs a Node.js process on the user's computer outside
the sandboxed browser process, it would give me much more flexibility. It
could do things like automatically load a user's plugin files, or just load
plugins at all without the annoying dialog that lies to the user, claiming
they're about to upload their entire multi-hundred-gigabyte Data folder to a
server (I really wish the
[HTMLInputElement.webkitdirectory](https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/webkitdirectory)
API used the same underlying code as the [HTML Drag and Drop
API](https://developer.mozilla.org/en-US/docs/Web/API/HTML_Drag_and_Drop_API),
which behaves much better).
* Improve the search
The mod search feature struggles the most with the statically generated
nature of the site. I found it very hard to pack all of the necessary info
into the search index for all 100k+ mods (the combined index for SE and LE is
around 6 MB). Asynchronously loading the indices with MiniSearch keeps the
browser from freezing, but it still takes a very long time to fully load. I
can't help but think there's a better way to shard the indices and fetch only
what's needed based on what the user is typing into the search box.
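One illustrative approach to that sharding (the shard key and file layout
here are made up for the sketch; this is not something the site currently
implements):

```typescript
// Shard the search index by the first character of each mod name, so the
// client only fetches the shard matching what the user has typed so far.
interface ModEntry {
  id: number;
  name: string;
}

function shardKey(term: string): string {
  const first = term.trim().charAt(0).toLowerCase();
  // Bucket anything non-alphanumeric into a catch-all shard.
  return /^[a-z0-9]$/.test(first) ? first : "other";
}

function buildShards(mods: ModEntry[]): Map<string, ModEntry[]> {
  const shards = new Map<string, ModEntry[]>();
  for (const mod of mods) {
    const key = shardKey(mod.name);
    const shard = shards.get(key) ?? [];
    shard.push(mod);
    shards.set(key, shard);
  }
  return shards;
}
```

Each shard could then be written out as its own JSON file at build time, and
the client would fetch only e.g. the "s" shard (and build a MiniSearch
instance from it) once the user types "sky".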
To be clear, a lot of the to-dos on the board are pipe dreams that I may never
get to. This project is sustained purely by my own motivation and interests,
and if something is too much of a pain to get working, I'll just drop it.
There will also be future Elder Scrolls games, and [future Bethesda games based
on roughly the same game engine](https://bethesda.net/en/game/starfield). It
would be neat to build a similar database for those games as their modding
communities develop, in real time.
Overall, I'm glad I made something of use to the modding community. I hope to
keep the site running for as long as people are modding Skyrim (until the
heat-death of the universe, probably).
<br>
---
#### Footnotes
[^bazaarrealm]:
Unfortunately, I basically lost interest in the mod after working on
Modmapper. I might still write a blog post about it eventually since I did a
lot of interesting hacking on the Skyrim game engine to try to add some
asynchronous multiplayer aspects. [Project is here if anyone is curious in
the meantime](https://github.com/thallada/BazaarRealmPlugin).
[^takedown]:
I sadly only have screenshots for some of the comments on that mod since it
was eventually taken down by the Nexus Mods admins. See the explanation in
the [Vortex integration section](#vortex-integration).
[^analytics]:
As recorded by Cloudflare's server-side analytics, which may record a fair
amount of bot traffic. I suspect this is the most accurate number I can get
since most of my users probably use an ad blocker that blocks client-side
analytics.
[^adult-mods]:
Every mod on Nexus Mods except for adult mods since the site restricts
viewing adult mods to only logged-in users and I wasn't able to get my
scraping bot to log in as a user.
[^rate-limit]:
Apparently my mass-downloading did not go unnoticed by the Nexus Mods admins.
I think it's technically against their terms of service to automatically
download mods, but I somehow got on their good side and was spared the
ban-hammer. I don't recommend anyone else run modmapper on the entire site
without talking to the admins beforehand and getting the okay from them.
[^plugin-collection]:
If you would like access to this dataset of plugins to do some data-mining
please reach out to me at [tyler@hallada.net](mailto:tyler@hallada.net)
(Note: only contains plugins files, no models, textures, audio, etc.). I
don't plan on releasing it publicly since that would surely go against many
mod authors' wishes/licenses.
[^cloudflare]:
I'm not sure I want to recommend anyone else use Cloudflare after [the whole
Kiwi Farms
debacle](https://www.theverge.com/2022/9/6/23339889/cloudflare-kiwi-farms-content-moderation-ddos).
I now regret having invested so much of the infrastructure in them. However,
I'm only using their free-tier, so at least I am a net-negative for their
business? I would recommend others look into
[Netlify](https://www.netlify.com/) or [Fastly](https://www.fastly.com/) for
similar offerings to Cloudflare Pages/CDN.
[^3d-terrain]:
I also tried to add a [raster Terrain-DEM
source](https://docs.mapbox.com/data/tilesets/reference/mapbox-terrain-dem-v1/)
for rendering the terrain in 3D. I got fairly far [generating my own DEM RGB
tiles](https://github.com/syncpoint/terrain-rgb) from an upscaled [greyscale
heightmap](https://i.imgur.com/9RErBDo.png) [constructed from the LAND
records in Skyrim.esm](https://www.nexusmods.com/skyrim/mods/80692) (view it
[here](https://www.dropbox.com/s/56lffk021riil6h/heightmap-4x_foolhardy_Remacri_rgb.tif?dl=0)).
But, it came out all wrong: [giant cliffs in the middle of the
map](/img/blog/modmapper-terrain-cliff.jpg) and [tiny spiky lumps with big
jumps in elevation at cell boundaries](/img/blog/modmapper-bad-terrain.jpg).
It seemed like more work to get right than it was worth.
[^second-announcement]:
[This was the announcement I posted to /r/skyrimmods for this feature](
https://www.reddit.com/r/skyrimmods/comments/ti3gjh/modmapper_update_load_plugins_in_your_load_order/)
[^wasm-troubles]:
At first, I noticed a strange issue with re-using the same worker on
different plugins multiple times. After a while (~30 reuses per worker), the
processing would slow to a crawl and eventually strange things started
happening (I was listening to music in my browser and it started to pop and
crackle). Processing time seemed to increase exponentially with the number of
times the worker was reused. So, to avoid this, I made the worker pool
terminate and recreate workers after every plugin it processed.
This ended up not being as slow as it sounds and worked fine. However, I
recently discovered that [wee_alloc, the most commonly suggested allocator to
use with Rust in wasm, has a memory leak and is mostly unmaintained
now](https://www.reddit.com/r/rust/comments/x1cle0/dont_use_wee_alloc_in_production_code_targeting/).
I switched to the default allocator and I didn't run into the exponentially
slow re-use problem. For some reason, the first run on a fresh tab is always
much faster than the second run, but subsequent runs are still fairly stable
in processing time.
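The terminate-and-recreate workaround can be sketched like this (a
simplified, synchronous stand-in: real Web Workers communicate asynchronously
via postMessage, and none of these names come from the actual Modmapper
code):

```typescript
// Anything worker-like: runs one task, can be terminated.
interface WorkerLike {
  run(plugin: string): string;
  terminate(): void;
}

// A "pool" that never reuses a worker: each plugin gets a freshly created
// worker that is terminated afterward, so no state (or leaked allocator
// memory) can accumulate across runs.
class FreshWorkerPool {
  constructor(private makeWorker: () => WorkerLike) {}

  process(plugin: string): string {
    const worker = this.makeWorker();
    try {
      return worker.run(plugin);
    } finally {
      worker.terminate(); // never reuse: recreate per plugin
    }
  }
}
```

Trading worker startup cost for a clean slate per task is what makes this
"not as slow as it sounds": spawning a worker is cheap relative to parsing a
large plugin file.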

BIN: new binary files added in this diff (contents not shown):

* assets/avenue_poplars.jpg (Normal file, 1.3 MiB)
* assets/blurry_clouds.png (Normal file, 52 KiB)
* assets/blurry_clouds2.png (Normal file, 52 KiB)
* assets/boston_aerial.jpg (Normal file, 815 KiB)
* assets/boston_skyline.jpg (Normal file, 160 KiB)
* assets/copenhagen_park.jpg (Normal file, 857 KiB)
* assets/cyrodiil_ingame.jpg (Normal file, 1.8 MiB)
* four files with names not shown (373 KiB, 504 KiB, 335 KiB, 336 KiB)
* assets/desert_satellite.jpg (Executable file, 656 KiB)
* assets/fallout4_map.png (Normal file, 151 KiB)
* assets/forest_hill.jpg (Normal file, 4.0 MiB)
* assets/forrest_autumn.jpg (Normal file, 692 KiB)
* one file with name not shown (247 KiB)
* assets/halo_shenandoah.png (Normal file, 329 KiB)
* assets/head_lioness.jpg (Normal file, 42 KiB)
* assets/jungle_nyc.png (Normal file, 307 KiB)
* one file with name not shown (101 KiB)
* assets/kaelan_terrain1.png (Normal file, 92 KiB)
* assets/kaelan_terrain2.jpg (Normal file, 62 KiB)
* assets/kaelan_terrain3.jpg (Normal file, 41 KiB)
* assets/kowloon.jpg (Normal file, 283 KiB)
* assets/kowloon_ngs.png (Normal file, 423 KiB)
* assets/lion_portrait.png (Normal file, 324 KiB)
* assets/me.png (Normal file, 412 KiB)
* assets/mexico_city.jpg (Normal file, 349 KiB)
* two files with names not shown (828 KiB, 848 KiB)
* assets/minecraft_map.jpg (Normal file, 99 KiB)
* assets/mountain_brook.jpg (Normal file, 311 KiB)
* one file with name not shown (372 KiB)
* assets/mountain_view.jpg (Normal file, 3.0 MiB)
* one file with name not shown (410 KiB)
* assets/ngs_map.jpg (Normal file, 1017 KiB)
* assets/nyc.jpg (Normal file, 745 KiB)
* assets/pixels.png (Normal file, 120 KiB)
* assets/pockets.jpg (Normal file, 763 KiB)
* assets/pockets_portrait.png (Normal file, 472 KiB)
* assets/poplars.png (Normal file, 336 KiB)
* assets/rainforest.jpg (Normal file, 227 KiB)
* two files with names not shown (630 KiB, 538 KiB)
* assets/river_satellite.png (Normal file, 2.4 MiB)
* five files with names not shown (471 KiB, 332 KiB, 269 KiB, 720 KiB, 463 KiB)
* assets/satellite_terrain3.png (Executable file, 337 KiB)
* one file with name not shown (130 KiB)
* assets/side_portrait.jpg (Normal file, 2.5 MiB)
* assets/sitting_forest.jpg (Normal file, 4.5 MiB)
* assets/stained_glass.jpg (Normal file, 621 KiB)
* one file with name not shown (551 KiB)
* assets/standing_forest.jpg (Normal file, 4.7 MiB)
* assets/starry_boston.png (Normal file, 428 KiB)
* assets/starry_night.jpg (Normal file, 388 KiB)
* one file with name not shown (1.2 MiB)
* assets/treatment_plant.jpg (Normal file, 368 KiB)
* assets/uk_satellite.jpg (Normal file, 293 KiB)
* one file with name not shown (448 KiB)

View File

@@ -4,40 +4,42 @@ title: Tyler Hallada - Blog
 ---
 {% for post in paginator.posts %}
+{% if post.hidden == null or post.hidden == false %}
 <div class="card">
-  <div class="row clearfix post-header">
-    <div class="column three-fourths">
-      <a href="{{ post.url }}"><h2 class="post-title">{{ post.title }}</h2></a>
-    </div>
-    <div class="column fourth">
-      <span class="timestamp">{{ post.date | date_to_string }}</span>
-    </div>
+  <div class="row clearfix">
+    <div class="column full post-header">
+      <h2 class="post-title"><a href="{{ post.url }}">{{ post.title }}</a></h2>
+      <span class="timestamp">{{ post.date | date_to_string }}</span>
+    </div>
   </div>
   <div class="row clearfix">
     <div class="column full post">
       {{ post.excerpt }}
     </div>
   </div>
   <div class="row clearfix more-row">
     <div class="column full">
       <a href="{{ post.url }}" class="read-more">Read More &raquo;</a>
     </div>
   </div>
 </div>
+{% endif %}
 {% endfor %}
 <div class="row clearfix pagination">
   <div class="column full">
     {% if paginator.previous_page %}
       {% if paginator.page == 2 %}
-        <a href="/blog" class="previous">&laquo; Previous</a>
+        <a href="/blog/" class="previous">&laquo; Previous</a>
       {% else %}
-        <a href="/blog/page{{ paginator.previous_page }}" class="previous">&laquo; Previous</a>
+        <a href="/blog/page{{ paginator.previous_page }}/" class="previous">&laquo; Previous</a>
       {% endif %}
     {% endif %}
     <span class="page_number ">Page {{ paginator.page }} of {{ paginator.total_pages }}</span>
     {% if paginator.next_page %}
-      <a href="/blog/page{{ paginator.next_page }}" class="next">Next &raquo;</a>
+      <a href="/blog/page{{ paginator.next_page }}/" class="next">Next &raquo;</a>
     {% endif %}
   </div>
 </div>
+{% include mail-form.html %}

View File

@@ -13,12 +13,21 @@
 }
 html {
-  width: 100%;
-  height: 100%;
-  background: whitesmoke;
+  background-color: whitesmoke;
+  margin: 0;
+  padding: 0;
 }
 body {
+  margin: 0;
+  padding: 0;
+  width: 100%;
+  height: 100%;
+  min-height: 100vh;
+  background-color: whitesmoke;
+}
+
+.root {
   font-family: 'Open Sans', Arial, sans-serif;
   font-style: normal;
   font-weight: 400;
@@ -39,9 +48,39 @@ a {
 }
 p {
-  margin: 0 0 10px;
+  margin: 0 0 12px;
+  font-size: 1rem;
+  line-height: 1.45rem;
 }
+
+img + em {
+  display: block;
+  margin-top: 4px;
+}
+
+li {
+  margin-bottom: 12px;
+}
+
+table {
+  min-width: 85%;
+  margin: 1em auto 1em;
+  border: 1px solid #cbcbcb;
+}
+
+td, th {
+  padding: 0.5em 0.5em;
+}
+
+td {
+  border: 1px solid #cbcbcb;
+}
+
+tr:nth-child(2n-1) td {
+  background-color: #f2f2f2;
+}
+
 a:visited {
   color:#855C85;
 }
@@ -89,10 +128,14 @@ blockquote p {
 .container {
   margin: 0 auto;
   max-width: 48rem;
-  width: 90%;
+  width: 100%;
 }
 @media (min-width: 40rem) {
+  .container {
+    width: 90%;
+  }
   .column {
     float: left;
     padding-left: 1rem;
@@ -105,6 +148,8 @@ blockquote p {
   .column.third { width: 33.3%; }
   .column.fourth { width: 25%; }
   .column.three-fourths { width: 75%; }
+  .column.fifth { width: 20%; }
+  .column.four-fifths { width: 80%; }
   .column.flow-opposite { float: right; }
 }
@@ -158,17 +203,15 @@ img { width: auto; max-width: 100%; height: auto; }
 .hide-mobile {
   display: none;
 }
-.hide-desktop-inline-block {
-  display: inline-block;
-}
-.hide-desktop-block {
-  display: block;
-}
+.hide-desktop-inline-block { display: inline-block }
+.hide-desktop-block { display: block }
+.hide-desktop-inline { display: inline }
 @media (min-width: 40rem) {
   .hide-desktop { display: none }
   .hide-mobile-inline-block { display: inline-block }
   .hide-mobile-block { display: block }
+  .hide-mobile-inline { display: inline }
 }
@@ -183,10 +226,12 @@ h1.title {
   font-size: 2rem;
   font-weight: 200;
   margin: 0;
+  margin-left: 0.5rem;
 }
 div.header {
-  margin: 20px 0 20px;
+  /* this is padding instead of margin to prevent <html> from poking out behind top of page */
+  padding: 20px 0 20px;
   text-align: center;
 }
@@ -201,18 +246,18 @@ div.header a {
 }
 span.timestamp {
+  display: block;
   font-size: 0.85rem;
-  margin-top: 7px;
-}
-
-@media (min-width: 40rem) {
-  span.timestamp {
-    float: right;
-  }
+  margin-top: 8px;
 }
 .post-header {
-  padding-bottom: 10px;
+  padding-bottom: 24px;
+  display: flex;
+  align-items: start;
+  justify-content: space-between;
+  column-gap: 12px;
+  flex-wrap: wrap;
 }
 a.read-more {
@@ -242,7 +287,7 @@ div.more-row {
 div.pagination {
   text-align: center;
-  margin-bottom: 10px;
+  margin-bottom: 20px;
 }
 div.rss {
@@ -317,6 +362,11 @@ a.rss img {
   background-color: #333;
 }
+.post img {
+  display: block;
+  margin: 0 auto;
+}
+
 .post img.half-left {
   float: none;
   width: 100%;
@@ -340,6 +390,58 @@ a.rss img {
   }
 }
+
+.post .row {
+  display: flex;
+  align: center;
+  justify-content: center;
+  gap: 8px 8px;
+  flex-wrap: wrap;
+}
+
+.post .row figure {
+  flex-basis: calc(50% - 8px);
+}
+
+@media (max-width: 40rem) {
+  .post .row {
+    flex-direction: column;
+  }
+}
+
+.post figure {
+  margin: 0;
+}
+
+.post figure figurecaption {
+  display: block;
+  margin-top: 4px;
+}
+
+/*****************************************************************************/
+/*
+/* Subscribe form
+/*
+/*****************************************************************************/
+
+.subscribe-form h3 {
+  margin-top: 10px;
+  margin-left: 10px;
+}
+
+.subscribe-form input {
+  margin: 10px;
+}
+
+.subscribe-form label {
+  margin-left: 10px;
+}
+
+.subscribe-form span.form-rss {
+  display: block;
+  margin-top: 20px;
+  margin-left: 10px;
+}
+
 /*****************************************************************************/
 /*
 /* Homepage
@@ -362,7 +464,6 @@ a.rss img {
   background: white;
   margin: 0 0.5rem 1rem;
   border-radius: 3px;
-  user-select: none;
 }
 h1.big-name {
@@ -493,3 +594,42 @@ div.options-panel form {
 div.options-panel form label {
   display: block;
 }
+
+/*****************************************************************************/
+/*
+/* Comments (isso)
+/*
+/*****************************************************************************/
+
+.isso-postbox .textarea-wrapper {
+  margin-top: 10px;
+  margin-bottom: 10px;
+}
+
+div.isso-postbox div.form-wrapper section.auth-section p.input-wrapper {
+  margin-right: 10px;
+}
+
+/*****************************************************************************/
+/*
+/* Light & Dark Theme Toggle
+/*
+/*****************************************************************************/
+
+div.theme-toggle {
+  position: static;
+  bottom: 0;
+  width: 100%;
+  text-align: center;
+  z-index: 2;
+  padding: 0.25rem 0.75rem;
+}
+
+@media (min-width: 40rem) {
+  div.theme-toggle {
+    position: absolute;
+    top: 0;
+    right: 0;
+    width: inherit;
+  }
+}

40
css/main_dark.css Normal file
View File

@ -0,0 +1,40 @@
/* Dark theme */
html {
filter: invert(90%);
background-color: rgba(10, 10, 10, 0.9);
}
img:not(.icon), video, div.video-container {
filter: invert(100%);
}
img:not(.icon) {
opacity: .90;
transition: opacity .5s ease-in-out;
}
img:hover:not(.icon) {
opacity: 1;
}
a:not(.card), a:hover:not(.card) {
filter: invert(100%);
}
a:not(.card) img:not(.icon) {
filter: none;
}
.card {
box-shadow: 0 1px 2px #fff !important;
}
.post pre {
background-color: #fff !important;
}
.post a code {
background-color: #222 !important;
border: 1px solid #333 !important;
}

Some files were not shown because too many files have changed in this diff.