Thursday, December 29, 2011

LuaJSON 1.3 Released

LuaJSON 1.3 is released to the wild!
Major changes made:
  • new 'nothrow' global option - currently a trivial 'pcall' hook on decode, but could later resolve to a more efficient handler (see the sketch below this list)
  • enhanced error output from the 'next' branch
  • hopefully "stackless" parser - the limit on parsing depth is now bounded by the heap rather than the stack
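
Since the 'nothrow' option is currently just a pcall hook, its behavior amounts to something like the following sketch (safe_decode is a made-up wrapper name, and the way the option itself is passed to decode is not shown here, only the equivalent pcall wrapper):

local json = require("json") -- LuaJSON

-- Protected decode: returns the parsed value, or nil plus an error message,
-- instead of raising an error on malformed input.
local function safe_decode(text)
  local ok, result = pcall(json.decode, text)
  if ok then
    return result
  end
  return nil, tostring(result)
end

print(safe_decode('{"valid": true}').valid) --> true
print(safe_decode('{oops'))                 --> nil followed by the error message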

Rockspec for those using LuaRocks if it hasn't hit their repo yet: https://raw.github.com/harningt/luajson/b7cb1e6221ae6b70b208b242c1654da39087230d/rockspecs/luajson-1.3-1.rockspec
GitHub link for signed tag: https://github.com/harningt/luajson/tree/1.3
Release download tarball: https://github.com/downloads/harningt/luajson/luajson-1.3.tar.gz

The "stackless" parser is basically a linearization of the LPeg parser so that it parses JSON tokens and passes them into a stack/state-machine implemented as a Lua state object. This degrades performance slightly in the pre 1.2.1 era, but removes C/LPeg stack depth problems encountered in all prior implementations (including the abominably slow 1.2.2 version that reduced the problem, but didn't solve it).

While profiling performance, I found that some unrolled versions seemed to be a wee bit faster, but used more memory and had more page faults. My focus for future releases of LuaJSON will be on speed, including the possibility of a 2.0 release that breaks compatibility in the interest of getting > 2 MB/s JSON parsing performance. Looking at the strict YAJL parser - it hits > 100 MB/s on the same data where I get 2 MB/s at best. If I can pull an MIT-like-licensed C-based JSON parser into Lua and get >= 10x performance, I may add an option for LuaJSON to delegate to that parser under the LuaJSON interface (for uniformity).
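
The delegation option might look roughly like the sketch below; "fastjson" is a made-up module name standing in for whatever C-based parser gets wrapped, so treat this purely as an illustration of the idea rather than a planned API.

local json = require("json") -- LuaJSON

-- Pick the fastest available decoder at load time, falling back to the pure
-- Lua/LPeg implementation when the hypothetical C module is not installed.
local backend = json.decode
do
  local ok, fastjson = pcall(require, "fastjson") -- hypothetical C parser binding
  if ok and type(fastjson) == "table" and fastjson.decode then
    backend = fastjson.decode
  end
end

local function decode(text, options)
  return backend(text, options)
end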

Later I plan to enhance encoding performance, but decoding is the more important piece right now.

Tuesday, June 21, 2011

CryptoFace Digest Design Oops

Designing an interface for managing a library of cryptographic digests seems so easy, right?

Select a digest from a list, process data, get a hash... all there is to it, right?

WRONG!

While pulling in another digest provider, Botan, I found some items that did not fit into the simple model, namely the configurability of some of the uncommon and newer digest algorithms:

  • Customizable output size for the three Skein internal state-size variants
  • Customizable "personalization" value of Skein
  • Custom number of rounds and output size for Tiger
  • ...
This is even without the notions of composing digests in various fashions, such as in parallel or in a Feistel scheme.

In light of this, I anticipate changing my mechanism for obtaining and enumerating digest implementations. Changes will likely include moving the enumeration of digests to more of a secondary feature, making the move to a set of 'well-defined' digest identifiers to be mapped from strings, and making way for parameterized construction of digests to accommodate more complex notions, including hash-based MACs/etc.

The change will not be without complication; however, having analyzed the problem and the Botan library, I think I may be able to make some elegant structures possible for dealing with complex algorithms... at least with the Lua engine. An example set of structures could be:

-- Simple sized sha2 filter
x = Filter(SHA2(256))
-- HMAC
x = Filter(HMAC(SHA2(256), "KEY"))
-- Complex chain of hashes
x = Filter(Parallel(SHA2(512), Skein(512,1024,"Personalization")))
-- Take the filter and stream file-to-file using ltn12
ltn12.pump.all(
  ltn12.source.file("SOURCEFILE"),
  ltn12.sink.chain(x, ltn12.sink.file("SOURCEFILE.hash")))

Tuesday, June 7, 2011

Review of "SQL Pocket Guide" by Jonathan Gennick

SQL Pocket Guide by Jonathan Gennick
My rating: 4 of 5 stars

Developing database queries with SQL is a challenge that often requires frequent documentation searches. The "SQL Pocket Guide" by Jonathan Gennick is a great converged reference for many common database implementations.

The best feature of this guide is the breadth of detail it offers. It provides a high-level view of database structures and useful details for taking strategies available in one implementation and possibly using them in another database engine. An example of this is the cross-referencing from vendor-specific naming, such as Oracle's "analytic functions" and DB2's "OLAP functions", to the standard's term "windowing functions". This lets you look something up under whatever name you are familiar with, were taught, or overheard, and find an appropriate redirection.

If you find that you are working with many different databases or want a quick reference to see if a given structure is available in a given database implementation, this guide is for you. Need a list of common data types for a category of data? This guide has it. Need the details on dealing with times and dates? This guide has a good 20 pages on it. Even if it does not have all the tiny details you need on a given topic, it can be a compass for finding your way through detailed documentation to what you want to find out.

The eBook format of this book was provided free through O'Reilly's Blogger Review program; you can purchase the book from the O'Reilly book store at: http://oreilly.com/catalog/0636920013471

You can support this blog by purchasing the book through Amazon at: SQL Pocket Guide (Pocket Guides)

View all my GoodReads reviews

Tuesday, April 19, 2011

GnuPG Key Plan

After much reading and analysis of the issue of re-keying (again!), I've come up with a plan for my GnuPG key security to help put some predictability into my key management.

  • ~week - 2011/04/20-2011/04/22 Generate and put in place an RSA 3072-bit primary keypair.
  • ~month - 2011/04/20-2011/04/30 Generate ECDSA NIST P-384 and ECDSA NIST P-521 primary keys
  • ~19 years - 2030 Migrate to the new ECDSA keys and revoke the RSA 3072-bit key, marking it superseded.

I will put my 3072-bit keypair into active use for compatibility reasons and because it should be adequate, per NIST SP800-57 Part 1, for beyond 2030. To better future-proof against a strong attack on RSA that compromises 3072-bit keys, I will practice key-signing with my ECDSA P-384 and P-521 keypairs where possible. I have chosen to use 2 additional keypairs in order to better prepare for possible stronger attacks in the future. ECDSA for OpenPGP is currently an IETF draft, now in its 8th revision (the last 7 revisions have not touched the ECDSA definitions).

Management of Multiple Primary Keys

With regard to key management, I intend to avoid publishing the P-384 and P-521 keys because I do not know which I will be using in the future and do not want to clutter the keyservers with keys that I will not be using in the near term. I will test key signing with at least the released version of GnuPG (the 2.0 series) to see if it can sign ECC keys (even though it will not likely be able to do anything else with them)... hopefully this will show that the key management process is somewhat blind to key contents. Assuming no better mechanism arrives by 2030, I will probably transition to either the P-384 or the P-521 ECDSA key. I intend to follow the NIST publications to watch for adjustments to the guidelines on RSA key usage for signatures.

Key Signing Policy

I will also write a key signing policy to be attached to signatures issued by my keys. One feature that I hope to write in is a reasonable mechanism to carry a signature over to another of the holder's keys with a lesser level of authentication than the initial key signing required. Given that obtaining key signature authentication is quite a challenge in Indiana, I want to make it practical to walk keys forward rather than requiring an extra "ordeal" to get keys re-signed. I intend to read published key signing policies to come up with one that provides good practices with practicality.

Key Publication

All of my keys will be published on my primary web site at http://www.eharning.us/gpg/. My active keys will be published to the key servers through hkp://keys.gnupg.org and also independently to the PGP Global Directory. I intend to keep the naming the same across moves to other web page management systems, for uniformity. I will also publish the key signing policy underneath that (eventually at http://www.eharning.us/gpg/key-signing-policy/). I will sign the markdown content that generates the key-signing-policy page and hopefully publish it linked directly from the web site for verification. What good is a policy if you can't verify who wrote it?

Friday, April 15, 2011

Key backup for the paranoid

While doing this GPG key work, I realized that if by some chance drives failed and USB keys no longer worked right, I'd be unable to access tons of data backed up encrypted with these keys. I wouldn't want to back up the GPG keys in the same way as my other data due to their sensitivity.

Some conventional methods of backup that can be used as a "standard" item to handle recovery:

  • CD/DVD backup
  • USB Flash Drive
  • Hard Drive
Hard drive backup is somewhat out of the question, as using an entire hard drive for backup and putting it in a safe is out of my budget. USB flash drives paired with optical backup work great as a "fast" recovery mechanism in case something goes wrong. The problem with these is that testing them as backups is somewhat tedious.

Here's where paper comes in. You print out your key data in some form that can be input back into the computer in a reasonable manner. Now this may seem backwards, but it is quite useful for backup. You can easily print, say, 10 copies of the key, slip them in something to reduce air exposure, and put them in safe deposit boxes, a home safe, etc. For a recovery check, you can visually inspect them to make sure that they are not degrading. To test thoroughly, you can take a small sample (ex: 1 from each backup location) and attempt a recovery. Redundancy with paper backup is quite trivial.

Now... how do you put the key on paper? You could print it in hex or base64 and OCR or manually type it back in... but that is tedious and error-prone. For high-density machine-readable data storage on paper you can use 2D barcodes. These typically have error-correction codes built in to help manage slight flaws in the paper or scan. A good article at Coding Horror, "The Paper Data Storage Option", illustrates some mechanisms for paper backup. I tried stuffing backup data into QR codes and DataMatrix blocks, but found that the available encoding/decoding software was quite finicky. Another problem with these formats is that they are not intended to store bulk data. There must be some other good way of encoding data for printing and recovering...

In steps the Windows application PaperBack by Oleh Yuschuk. This tool takes an input file and prints out dotted pages arranged in blocks to store your data. It provides redundancy by duplicating blocks and arranging them throughout the pages, as well as error-checking codes within each individual block.

I also tried the Twibright Optar paper backup mechanism, but ran into the problem that it expects a high-quality printing medium, and I found it hard to tweak it to work with my readily available inkjet. Of course, for final usage, laser printing would likely yield high enough quality to meet my needs.

The two backup mechanisms have their pros and cons.

PaperBack
Pros:

  • Easy to use - builtin printing and scanning capabilities
  • Visual display of the quality of the scan, permitting visual detection of where your backup may be degrading
  • Easy customization of DPI, dot-size, redundancy
  • Good reliability even in the face of errors injected through GIMP
Cons:
  • Windows only - right now
  • Imports/Exports only as bitmap if not using direct print/scan

Twibright Optar
Pros:

  • Linux-compatible
  • Provided documentation gives a good description of the theory behind the encoding
  • Better data density
  • PNG input support
Cons:
  • Needs hand-edited code to change dot-size
  • Seems to be very sensitive to artificially introduced errors through whole-image noise/damage/rotation

I selected PaperBack as my primary backup mechanism since it met my needs and provided an easy way to tell how "bad" my backup was. I took my GPG secret key blob, sent it to PaperBack, and was able to pack it onto less than a single sheet of paper at the lowest DPI (80), with a redundancy of 10 block copies and AES encryption of the entire data blob (beyond what GPG already protects the key data with). Recovery was fast, though it showed errors right from the start; these were well within the correctable range. Seeing how useful paper/image-based data transfer is, I intend to use a QR or DataMatrix code carrying my public key information (ID/fingerprint) to help with key signing and simple detection of which key should be used for verification or encrypted email.

With a paper copy in protected locations, I can be even more certain that the lifetime of my GPG keys will exceed the lifetime of the machine they came from. This sort of thing becomes quite important when protecting information that may, by chance or on purpose, outlive you.

8192-bit GPG Certification Key - Why and How

I generated my primary key, the certification key, as an 8192-bit RSA key to help ensure that it lasts for quite some time. I figure this should prove adequate until ECC is integrated into the OpenPGP specification and a majority of applications start supporting it. I keep this key very well protected and excluded from the set of private keys I use regularly.

Now for the how. GPG does not permit generation of 8192-bit keys normally. I found something somewhere hinting at using the batch mode of GPG key generation, but do not recall where. An example command-set that will get you an 8192-bit RSA signing key:

gpg --batch --gen-key <<EOF
Key-Type: RSA
Key-Length: 8192
Name-Real: ME
Name-Comment: COMMENT
Name-Email: EMAIL
Passphrase: PASSWORD
EOF

To segregate my certification key from the keys I use day-to-day, I exported the secret portions of the subkeys using --export-secret-subkeys, wiped out the overall secret key, and then re-imported the subkeys-only file. One downside to this mechanism is that it isn't quite OpenPGP compliant, and other tools such as APG cannot use the subkeys file (they complain about the missing primary key).

Tuesday, April 12, 2011

Venture Back to GnuPG and my GPG Key

GPG KEY:
sec#  8192R/B7CE5252 2011-04-11
    Key fingerprint = 5359 D88D 11DB 6981 C92E  A723 023C 6BB2 B7CE 5252
uid                  Thomas Harning Jr 
ssb   2048R/7B0654AB 2011-04-11 (email signing)
ssb   2048R/72F567FF 2011-04-11 (email/file decryption)
ssb   4096R/97E7681D 2011-04-11 (codesigning/etc)

I've ventured back into the realm of GPG with its web-of-trust and easy file signing/encryption. I was prompted to do this when I realized I had no good long-term cryptography solution for dealing with documents that I want protected and available in the future, even if my safe drives fail.


In my scheme I planned to have the following sort of key structure:
  • Root Protected Key - large key and stored off-disk
    • Machine Keys - each machine gets its own keys to manage for encryption/decryption

The problem I realized with this is that in order to do email encryption/signing, I may have to go to a specific machine to recover the data.  There is also the problem that multiple keys complicate managing trust.

When working through GPG's features, I realized I could have a similar structure without multiple independent GPG keys... GPG has the concept of subkeys, which lets me do what I want with a centrally managed identity. A short little dance lets me set up a root protected key that is not on the system disk, but instead on an encrypted removable drive. You could also do this with a hardware token, but currently key size is limited, and a hardware token has more value if you operate in a less-trusted environment or may potentially lose it.

My single key is setup as follows:

  • Master key: non-expiring 8192-bit RSA key with Certify and Sign capabilities
    Stored on a hardware-encrypted drive (with copies of the "day-to-day" keys) and used only in a "safe" environment (ex: Linux LiveCD)
  • 2048-bit Email/File signing key expiring in a few years
  • 2048-bit Email/File decryption key expiring in a few years
  • 4096-bit Signing key - for uses such as software signing, expiring after email keys

Request: sign my GPG key. Whatever measures you feel are good for proving my identity, let me know. Depending on who you are, I'll try to figure out what sort of proof I'd need to cross-sign your key.

This Year's Fiction Book Plan

For one of my New Year's resolutions, I stated that I would write and publish a book this year. I believe that I may have finally come up with my subject matter. Watching Netflix movies / television episodes and listening to podcasts, I found a good working "environment" that I can write a story in.
The following bullet-points illustrate some items to work in:
  • Science-Fiction Adventure w/ Mystery elements
  • Team of agents (5-6?)
  • Fantasy mix-in with extra-terrestrial technology/creatures
  • Possible dimension/time-travel/vision/manipulation
I have ideas for how to get this put together, such as starting it off in the middle of the action with a new (or soon-to-be) agent.  He/she would then act as an audience surrogate working to unveil the status quo and start the "real" bits of action.
Depending on how the plot rolls out, I intend to at least lay out a world in which additional stories can be told, or better yet, have the potential to be a set of serial stories.

Any ideas on what could be mixed in, character ideas, or mythologies to incorporate (ex: Cthulhu, Egyptian, Doctor Who, etc.) are VERY welcome.

If anyone has any editing experience or knows of any "budget/free" science-fiction editors, let me know; if I think this is quality, I'd like to polish it without relying on my skewed personal opinion.

Thursday, February 10, 2011

"Small" O'Reilly Book/Video Wish List

Here's a short wish list of O'Reilly books and videos that I think would be great, both for professional work and for personal enjoyment developing my own software :)



Sunday, February 6, 2011

Review of "The Art of Concurrency" by Clay Breshears

The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications by Clay Breshears
My rating: 4 of 5 stars

With CPUs growing in power by adding additional cores as opposed to just getting “faster”, learning how to take advantage of parallel programming is a must. The book “The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications” by Clay Breshears works great as a reference and guide for determining when parallelization may be possible, how it could be done, and what to look out for.

The book introduces the reader to parallel programming with a set of useful rules and guidelines to follow to plan for optimizing algorithms by distributing workloads through concurrent programming. Much of the remainder of the book enumerates some common tasks and how to make them concurrent. One of the best parts of the common task listing is the scorecard for evaluating the quality of the implementation. The scorecard includes the useful performance factors of “efficiency” and “scalability”. It also includes the important details of “simplicity” and “portability”, important when evaluating methods for maintainable code.

The common threading tools OpenMP, Intel Threading Building Blocks, and POSIX threads are described in the early chapters and sprinkled throughout the examples in a useful manner, providing exposure to different ways one might implement concurrency; not everyone needs to re-invent the wheel when optimizing tasks.

The eBook format of this book was provided free through O'Reilly's Blogger Review program; you can purchase the book from the O'Reilly book store at: http://oreilly.com/catalog/9780596521547

You can support this blog by purchasing the book through Amazon at:

View all my GoodReads reviews

Saturday, January 22, 2011

Review of "The Nexus" book

The Nexus by Richard Fazio
My rating: 3 of 5 stars

The Nexus was my first GoodReads "FirstRead" free book. It's a science-fiction novel set in contemporary New York, speckled with bits of metaphysics, conspiracy, and danger.

The main character, Balthazar Sykes, embarks on a personal quest to figure out what is going on with his mind, leading him to discover how he became the way he is while building stronger relationships. The quest is experienced through the eyes of many characters. The antagonists' point of view is revealed in a few segments, quite effectively giving the shadowy insights that tease the reader until resolved later.

A few of the characters, such as Sykes's love interest, Alex, and co-worker Madge, develop into well-rounded figures. Others do not get quite the same development time, out of necessity: helpful side characters for lack of importance; antagonists to avoid ruining the suspense.

The story's epilogue works quite well in closing the few open ends left. The story is just the right length, both in terms of book size and timeline. You get just enough in terms of introducing characters as events begin to unfold, and not too much after the resolution, just bits of closure regarding relationships.

View all my GoodReads reviews

Thursday, January 20, 2011

LuaJSON Roadmap

My LuaJSON project (hosted at GitHub) has slowed down in development over time, as new features are hard to come by when the problem is so well defined. I do have a few plans for LuaJSON, however.

First priorities:

  • Figure out how to do nil round-tripping safely (see the sketch at the end of this post)
  • Create validation tests for the enhanced error output in the latest-and-greatest code
  • Make sure it works with Lua 5.2
Future items:
  • Prepare LuaJSON for inclusion in the Gentoo package database
  • Construct small C Lua extension to offer faster encoding options
  • Construct small C Lua extension to offer faster decoding options 
If you have any ideas for enhancements or find new bugs, please don't hesitate to post them on my LuaJSON issue tracker or here.
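
On the nil round-tripping item: the core problem is that Lua tables cannot hold nil, so a decoded JSON null silently shortens arrays and drops object keys. The usual workaround is a sentinel value; the sketch below is a generic illustration of that approach (NULL is an arbitrary sentinel), not LuaJSON's actual null handling.

-- A unique sentinel standing in for JSON null (illustrative only).
local NULL = setmetatable({}, { __tostring = function() return "null" end })

-- Decode side: map JSON null to the sentinel so array slots survive.
local decoded = { 1, NULL, 3 }  -- stands in for decoding "[1, null, 3]"
assert(#decoded == 3)           -- the middle slot is preserved

-- Encode side: map the sentinel back to the literal null.
local parts = {}
for _, v in ipairs(decoded) do
  parts[#parts + 1] = (v == NULL) and "null" or tostring(v)
end
print("[" .. table.concat(parts, ", ") .. "]") --> [1, null, 3]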

Wednesday, January 19, 2011

Lumina and the Future of luaevent

My long-term goal of producing the luaevent replacement, Lumina (part of the "ehrCom" parent project), has been put off for quite some time. Given my recent time constraints, I estimate that this re-engineering project will take quite a while. The vastness of the project and my intention to document the design before implementation make it far from usable in the near term.

In the meantime, luaevent is a great way to get fast event-based socket programming in Lua right now. Matthew Wild of the Prosody team constructed a fork of luaevent a while back, carrying patches he applied when I didn't have the time to review and apply them to the main tree. I recently reviewed and applied the changes to produce new 0.3.0 and 0.3.1 releases. The 0.3.0 release was missing some of the latest updates, so a 0.3.1 bugfix release followed.

I intend to keep luaevent up-to-date with any provided patches, reviewing them within a reasonable period. That way the need for the forked version can go away and the original tree can be used.

The next major enhancements that I foresee luaevent having are:

  • Enhanced build tool integration
    • Autotools for Linux
    • CMake for Linux and Windows
  • Review of implementation to see if it can be better managed using new techniques learned from other Lua projects
  • Mirrored API using libev