Tuesday, March 29, 2016

Almost There!

YEAAAAAAAAAAAAAH!
I think I got the bootloader working. That was . . . much more complicated than it should have been. Of course, I am still not done with that USB stick. Today there were so many failed attempts to get the bootloader to function that I began to think I would never figure out what would make it work. I first just dumped the image onto the flash drive and tried grub-install (GRUB is the bootloader), but of course that didn't work! Then I tried creating an EFI partition (EFI is the boot protocol that newer computers use) . . . and ended up with multiple conflicting partition tables. At that point I had no idea what was going on - which one was the real one? Determined to figure this out, I kept issuing random commands until . . . GRUB menu! YEAH! I still don't know how I did it.
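
For anyone attempting the same thing, the general recipe that usually works goes something like this (a rough sketch, not exactly what I typed - the device name /dev/sdX, sizes, and mount points are placeholders, so double-check them before writing to anything, and run it as root with the GRUB EFI packages installed):

# create a GPT with a small EFI System Partition plus a root partition
parted /dev/sdX mklabel gpt
parted /dev/sdX mkpart ESP fat32 1MiB 200MiB
parted /dev/sdX set 1 boot on
parted /dev/sdX mkpart root ext4 200MiB 100%
mkfs.vfat /dev/sdX1 && mkfs.ext4 /dev/sdX2
# mount the stick and install GRUB for EFI onto it
mount /dev/sdX2 /mnt && mkdir -p /mnt/boot/efi && mount /dev/sdX1 /mnt/boot/efi
grub-install --target=x86_64-efi --efi-directory=/mnt/boot/efi --boot-directory=/mnt/boot --removable

You still need a grub.cfg under /mnt/boot/grub that points at your kernel, but at least the partition tables stop fighting each other.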

After a little more work I managed to get it to boot the kernel (the core of the operating system). Shortly after my success, I physically broke the flash drive. UGH!!!!! This is not very surprising, considering how things in this process have been going. My little laptop had tilted sideways and bent the connector. To get the broken flash drive to work at all, I had to keep pressing down on the stick and hold it at a precise angle to maintain electrical contact. Imagine doing that while frantically deleting files because my big laptop was running out of disk space (I had to free 5 GB) for the copy operation. Finally, it copied onto my computer and I was able to set everything up on a new, working flash drive. Now I just need it to be able to open a terminal on the screen of my little laptop . . . I am almost there! The end is in sight!


Tech Stuff: Creating a Personal Cluster Part 2 - Designing a Network

Most of you will probably make a simple tree network - just attaching your nodes to a network switch, which may in turn be connected to another network switch, and so on. This method is extremely simple and is great if you do not have much internal traffic and your cluster is relatively small. Decent 100 Mbps 5-port switches are available online for $10, such as this. You should pay attention to the overall switching capacity, though - some switches cannot run every port at top speed at the same time. The one I linked to has a switching capacity of 1.6 Gbps, so it can run at full speed (if they aren't lying/using unrealistic conditions). You can then connect a few of those switches to another one to accommodate all of your nodes - for example, 16 nodes would fit on four 5-port switches (four nodes plus one uplink each) feeding into a fifth.

It is also possible for the more adventurous to create an OpenFlow software-defined network. These networks scale efficiently and can handle ludicrous amounts of bandwidth, but they are overkill for most small clusters. They may be especially helpful if you want to use your cluster as a testbed for datacenter applications. To deploy SDN on most SBCs, you will need to add USB Ethernet adapters, since the boards typically only have one built-in port. Or, with enough work, you could theoretically run all of the networking over USB.
#slice2016

Monday, March 28, 2016

Buildroot Finally. . . Almost Builds the Root

Buildroot finally built most of the root! At the end of yesterday's work, it would not build the ISO image. So today I did a git pull - downloading the latest files from our online code storage - and tried again. Something had corrupted the cpupower module - the component I had the make problem with - even further. My fix would not work anymore, so I disabled cpupower and the build kept going. Buildroot finished compiling, but the ISO still would not build.

I decided that I am going to set up my Buildroot compilation on a flash drive manually. Hopefully the bootloader will not become compromised again, as I have failed at setting up bootloaders on USB sticks before. Either way, I am going to keep trying things until it works. Minimally, I want the system to work before our part of the robotics team meets on Wednesday; ideally I want it to work today!


Tech Stuff: Creating a Personal Cluster Part 1 - Selecting SBCs and Storage

In my previous post, I do not believe that I explained everything in enough detail to allow one to plan a personal cluster. In this series of posts, I will explain everything required to plan, design, construct, set up, and maintain your own cluster based on single-board computers. This explanation will be divided into these parts:
  1. Selecting SBCs
  2. Selecting Storage
  3. Designing a Network
  4. Power Supply
  5. Selecting Management Systems
    1. Mass SSH
    2. Docker
    3. KVM
  6. Storage Management Systems
    1. GlusterFS
    2. LVM
  7. Designing an Organizer
  8. Using Buildroot to Create System Images
    1. Minimizing Attack Surface
  9. Setting up Firewalls
    1. IPTables
    2. HTTP proxies
  10. Assembling the Cluster
  11. Debugging and Testing the Cluster
  12. Deploying Software on the Cluster
  13. Adding on to the Cluster
Selecting SBCs
First, figure out how much RAM you need. Do you need 512 MB per node, 1 GB per node, or 2 GB per node? Or do you need a mix? After that has been decided, figure out your networking needs. Once those have been determined, select a board with enough CPU performance. It is important to note that clock frequency and core count do not matter as much as the core type. For example, a quad-core ARM Cortex-A17 at 1 GHz is roughly comparable to a quad Cortex-A7 at 1.5 GHz for many tasks, but the Cortex-A7 has higher energy efficiency and is generally cheaper. Quad Cortex-A7 boards are probably the most efficient available configuration in terms of CPU cost and energy efficiency for ARM SBCs; these include the Raspberry Pi 2, the Orange Pi line, and similar. For 1 GB per node, the Orange Pi PC is probably your best bet for moderate-network-traffic operations. If you can squeeze down to 512 MB per node, you can get similar performance for even cheaper. Also, you may want to carefully adjust the clock speeds of the SBCs to fit their environments and workloads. For example, bursty tasks should have a lower base clock and a higher burst clock (with a lower temperature threshold). For safety, test such settings on an individual board before rolling them out to the entire cluster.
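
If your board's kernel exposes cpufreq, the tuning itself is just a matter of writing to sysfs as root. A minimal sketch (the governor and frequency values are assumptions - check what your particular board actually supports first):

# list the frequencies the driver supports
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
# cap the maximum clock and use the on-demand governor on every core
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    echo ondemand > $cpu/cpufreq/scaling_governor
    echo 1200000  > $cpu/cpufreq/scaling_max_freq   # value is in kHz
done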

Selecting Storage
You must also determine your medium for data storage. I mainly suggest NAS units or USB hard drives. If some of your SBCs have USB 3.0, you can use a USB-to-SATA adapter to attach an SSD for quick-access files. But if your tasks are more archival, you may want to just plug inexpensive backup drives into USB 2.0. This will be slower, but it achieves higher storage capacity at a lower cost. The storage can easily become more expensive than the compute nodes, so think carefully over all aspects of your decision. If you are performing mixed tasks, you may want to combine high-speed/low-capacity and low-speed/high-capacity storage media to get the best of both. You should also use the RAM of your compute nodes to cache the storage devices for optimal performance.
#slice2016

Sunday, March 27, 2016

Buildroot Resumes. . .

Buildroot is STILL building the root. It crashed last night because of a bug in a make script in the Linux kernel. While attempting to get the compilation to resume, I first tried re-running. That didn't work. Then I noticed one of the error messages said that it was missing a library, and googled it. The first search turned up only 8 results, none of which had anything to do with my issue. So I tried shortening the search term.

The second search came up with one related result (all the others were garbage), which said to check the make script. When I opened the script, the compiler option was surprisingly absent. It is strange that nobody noticed it wouldn't compile when testing the code. Anyway, I inserted the compiler option and Buildroot resumed.
Still waiting . . .
     hoping . . .
          praying . . .
               crossing my fingers . . .
                     you name it, I'm doing it . . .
I really want to get going on transferring the robotics code to our flash drive so that we are not dependent on using one particular computer.
You never know what will happen in life, as this project has shown,
        so it is always good to have a back-up plan.

Bonus Tech Stuff: Personal Cluster Processing

With SBCs (single-board computers) available for cheap, it has become incredibly inexpensive to deploy personal clusters. In this post, I will try to explain some of the choices involved in constructing a cluster, and what you may want to use it for.

What can I use a Cluster for?
Personal clusters can be used for highly parallel tasks that would take a long time to complete on a personal computer. One task that I badly need, and that other people probably need too, is compiling large source trees. With a powerful cluster running distcc, you could theoretically compile the Linux kernel in a minute. If you do not compile that much, you could use the cluster for simpler tasks such as web serving, HTTP caching, a NoSQL server, or high-capacity file storage. Or, you could donate processing time to a BOINC-based crowd processing project such as VirtualLHC@Home.
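
As a sketch of what a distcc setup might look like (the addresses, subnet, and job count are assumptions - tune them to your own cluster):

# on each node: run the distcc daemon and allow connections from the local subnet
distccd --daemon --allow 192.168.1.0/24
# on the machine driving the build: list the nodes and fan the compile jobs out
export DISTCC_HOSTS="192.168.1.11 192.168.1.12 192.168.1.13 localhost"
make -j16 CC="distcc gcc"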

What Nodes (Computers) can I use?
You may use any device you want that runs Linux. It is often appealing to set up a cluster with old, out-of-use computers that you may have lying around. However, if you are going to be serious about your cluster, the energy consumption of the old computers may outweigh the cost of new nodes in the long run. I suggest building a cluster from single-board computers because they are cheap, low-energy, and long-lasting. You should select an SBC based on your needs. For general purposes, an Orange Pi PC or Orange Pi One would be your most cost-effective choice. However, if your tasks need a lot of RAM or are disk intensive, you may want to consider an ODROID-XU4 with a USB-connected SSD or similar. For some people, a mix of boards could be best.

What Software can I use?
For some people, stock OS images and SSH will suffice. However, for more advanced users I suggest running Docker on top of Buildroot. This will let you run whatever Linux software you like while freeing up as much RAM and processing power as possible. Buildroot has support for most popular SBCs and allows you to generate a minimalist image to run Docker on. Removing unnecessary software increases processing capacity, decreases attack surface, and significantly reduces the chances of unexpected crashes. Docker then lets you quickly deploy any Linux userspace software stack you want.
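
Once the image is booted and the Docker daemon is running, deploying software is basically a one-liner. For example (a sketch - the image name is just an illustration, and the node needs network access to a registry):

# pull and run a containerized web server on a node, restarting it automatically
docker run -d --name web -p 80:80 --restart=always nginx
# see what is running and roughly what it is using
docker ps
docker stats --no-stream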

What can I do to Keep this Organized?

If possible, I would suggest building a slide-in mounting system with proper wire routing. However, some people just tend to make everything messy. In order to mitigate this, I have a few simple tips:
  1. Label all wires
  2. Color code whenever possible
  3. Use wire routing whenever possible
  4. Avoid floating boards and routers - if one wire slips, the whole thing could come crashing down
  5. Whether or not you do the above, you should check that you know where every wire goes before bringing the system online
If you are only using a few nodes, tips 1-4 may not make sense. But number 5 is the most important: it may save you hours of debugging a problem that turns out to be a network switch unplugged from power or something similarly mundane.
#slice2016

Saturday, March 26, 2016

Buildroot Won't Build the Root!

Ugh, Buildroot won't build the root. Yesterday, I tried to set up a Buildroot system for robotics during our 9 a.m. - 6 p.m. meeting. The compilation would have taken forever on my laptop, so I tried it on my desktop through SSH. It took me until today, though, to figure out why it was not working. Apparently last time I tried to fix my desktop graphics, I broke a lot more than my graphics drivers. When I tried installing a dependency for compiling Buildroot, I was surprised to find my system in dependency hell.

To get out of this, I attempted to reinstall my operating system. This time I wanted to use Debian Stretch, because it had AMDGPU available (see previous post). However, the Stretch installer didn't work, so I instead installed Debian Jessie. That worked (slow graphics though. . .) and I was able to start the Buildroot compilation process. Hopefully it works this time. I am writing this while I am waiting, so the final outcome is yet to be determined.

Bonus Tech Stuff: Gaming Console Emulators on Linux

There are many open source console emulators available - both for newer (Wii) and older (Atari) consoles. For most older gaming systems, you can use RetroArch. RetroArch is an emulator framework for classic gaming consoles. There are RetroArch emulators for PSX, SNES, GBA, Sega Genesis, Atari 2600, and many other systems. There is a Linux distro called Lakka which acts as an easy-to-use front-end for RetroArch.
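
RetroArch is usually driven by picking a "core" (the libretro emulator for a particular console) and a ROM. From the command line that looks roughly like this (the core path is an assumption - it varies by distro and by which cores you have installed):

retroarch -L /usr/lib/libretro/snes9x_libretro.so game.sfc

The graphical menu does the same thing, just without having to remember where the cores live.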

Wii and GameCube fans can use Dolphin - a high-speed, feature-rich emulator. Wii and GameCube games are compiled for the consoles' PowerPC processors, so they cannot run directly on a PC. Dolphin dynamically recompiles this PowerPC code into native system code, allowing it to take advantage of high-performance hardware. The graphics are handled through OpenGL, so you can get better graphics than you would on the actual console (HD/4K if you get a high-resolution texture mod). Dolphin also works with 84.4% of tested games, so you can run most titles.

#slice2016

Friday, March 25, 2016

My Favorite Linux-Compatible Game: Robocraft

Earlier today, I was playing Robocraft. Fortunately it was not having connection errors, so I was able to finish many rounds. Using my awesome robot and a small glitch in the Unity physics engine, I was able to lift an enemy into the air. A second later the enemy exploded midair - blocks flying in every direction. Because of situations like this, I was able to gain points and help my team to victory against enemies which were much higher leveled.

During my first battle with my "lawnmower" robot (see below), I lost because I kept getting blasted out of the sky before I could deal any damage. However, I did earn a large sum of game money from these battles and, after selling all of my other robots to combine my funds, was able to upgrade significantly. This massive upgrade improved my robot enough to make a big difference in my gaming ability. So, in the next game my robot destroyed several enemy robots which helped my team to victory. The next few games provided me with opportunities to further improve my skills. It feels great to finally become proficient at my favorite video game!

Robocraft is a Unity-based Steam game where you build and fight robots. It is currently in development and is rapidly improving. The game requires design skill in order to create an efficient destroying machine. As you deal damage and scan with radar, your overclock bar gradually fills. Once the bar is full, your overclock levels up. Each overclock level increases attack rate, damage, speed, and material strength.

In Robocraft, there are 8 types of weapons: Laser, Plasma Launcher, Rail Cannon, Nanotech Disruptor, Tesla Blade, Proto Seeker, Aeroflak Cannon, and Lock-on Missile Launcher. The most basic is the Laser, which shoots low-damage, zero-gravity projectiles at a relatively high firing rate. Also available relatively early on is the Plasma Launcher, which shoots higher-damage projectiles fairly fast. This weapon is not overpowered because of a game mechanic called energy: all weapons consume energy, and energy regenerates at a constant rate. Later in the game, aircraft are haunted by the Rail Cannon - a long-range, high-accuracy weapon with a low firing rate. A single shot from one of these can blast a plane out of the air. Also important a little later is the Nanotech Disruptor, which allows you to heal members of your team. My current favorite - the Tesla Blade - is the only melee weapon in the game. Proto Seekers and Aeroflak Cannons can auto-target and follow enemies, but only if they are detected on radar.

My "Lawnmower"
For my robot, I use a design which classifies as a "lawnmower." It is a plane with a row of five triple Tesla Blades on the front. If I fly into an enemy, it deals fifteen impacts of 47400 damage in quick succession - enough to obliterate an enemy. However, this type of design has significant drawbacks. It is incredibly brittle and can be destroyed in a few shots. When I miss, my target typically destroys me within seconds, and my robot can also be destroyed before it even reaches the target. Because it is especially vulnerable to powerful weapons, a few successful shots will render it useless or destroy it. I mitigate these risks by using high-level radar jammers and radar equipment to scan for potential targets.

Robocraft is an exciting and challenging game.  If you have never played it, you may want  to consider checking it out over break.
#slice2016

Thursday, March 24, 2016

Break has Arrived!

Break has arrived! As soon as I came in the door and dumped my books on the floor, I headed straight to the Wii to start playing Super Smash Bros. Brawl. It is great to be able to take a break and play games without high level thinking. I still find it hilarious that I can often win in a 5-life battle against a max-level computer player without paying attention. It is difficult to recall the last time I have played more than a few rounds of Brawl during the week.

After Brawl, it was on to StarCraft II. When I previously played this game on my big laptop it was incredibly slow because of my Windows setup, so I decided to install it on my Linux system through PlayOnLinux. The install worked perfectly; however, when I tried to launch it, the launcher was distorted (see picture). This was not because of PlayOnLinux - it was because my graphics driver was out of date. Updating the graphics driver should be easy once I get around to it. For right now, I just want to chill, not think too much, and enjoy the first day of break. That is why I am ending this blog post and going to watch TV.




Bonus Tech Stuff: PlayOnLinux - Easily Running Windows Games (and other software) on Linux

PlayOnLinux is a system that simplifies the installation of Windows programs on Linux. It is built on WINE, the Windows compatibility layer. PlayOnLinux has configurations for running various games (and of course other stuff) that are Windows-only. It automatically installs the appropriate WINE version and sets up a WINE configuration which is known to work with that program. For some games, it can even automatically fetch the installer.

PlayOnLinux runs a separate WINE instance for each program - called a "virtual drive" - in order to keep programs from clashing and to manage their different configuration needs. To make it easier to use, PlayOnLinux has both a command-line and a graphical interface. It is written in Python, and each supported game has a Python script that configures WINE to work with it.
#slice2016

Wednesday, March 23, 2016

Robotics Meeting . . . Where Nothing Happened

When I arrived at robotics club, there was practically nobody there. Tien (the leader of the group) and Matt (who wrote the top side laptop code) were looking at the finally completed frame of the robot. It looked pretty impressive. In preparation for the next steps, we went into the storage closet to look through all of the robotics materials from previous years. The closet can best be described as organized chaos. After scoping out the many items to choose from, we decided that all we needed was PVC pipe - and a lot of it. We filled a large box with a ludicrous amount of PVC pipe and hauled it into the Tech Lab.

After that, we hauled all of the robotics stuff out to Tien's minivan. We are bringing our robot and all of our supplies to Tien's house so we can hopefully finish building it over the break. Out by the minivan, we reviewed the plan for how we were going to accomplish this. Before we knew it, we had finished packing everything somewhat neatly into his car. The weather was exceptionally nice - sunny and warm with a slight breeze - so it felt good to be outside unlike previous weeks when we were freezing. It finally seems that all of the work we are doing on the robot will come to fruition soon.

Bonus Tech Stuff: LIGHTTPD - A Lightweight Web Server

LIGHTTPD is a lightweight web server. It supports the common features - SSL/TLS, HTTP compression, and so on. Most importantly, it is lightweight, using only about 1 MB of memory under light load. This makes it efficient for embedded systems, and it is often used on single-board computers such as the Raspberry Pi.

For more serious applications, it has support for load balancing. It can handle several hundred requests per second on personal hardware, and can go much faster on a typical server. LIGHTTPD has also been used for serving larger files - it is used by some package repository mirrors. It could likewise be used for high-capacity serving from a cluster of minimalist nodes in order to reduce cost. I may consider using it for an upcoming project.
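
To get a feel for how little configuration it needs, here is a toy setup on a Debian-style system (a sketch, not production-ready - the document root and port are assumptions):

aptitude install lighttpd
cat > /tmp/lighttpd-test.conf <<'EOF'
server.document-root = "/var/www/html"
server.port          = 8080
index-file.names     = ( "index.html" )
EOF
lighttpd -t -f /tmp/lighttpd-test.conf   # check the config for syntax errors
lighttpd -D -f /tmp/lighttpd-test.conf   # run it in the foreground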
#slice2016

Tuesday, March 22, 2016

The TV and the Beam Gap


Earlier today I decided that it was about time for me to put my TV up on the wall above my desk. It had been sitting on the blue carpet of my room, right next to my window, ever since we moved into our house. The main reason I wanted the TV up was not to watch shows or movies, but to use it as a giant computer monitor. That way I could play computer games like Robocraft on a much bigger screen. It would also allow me to put terminals on a separate screen, so I would not have to pull them up and switch back to Chromium when working on a complicated project or juggling multiple tasks at the same time.

I enlisted the help of my parents as well as my cat Feisty. The first thing we needed to do was move my desk out of the way. My parents and I did this while Feisty sat on the bed and supervised. I then used the stud-finder to look for the beams and mark them so that the thick, long metal screws which bear the weight of the TV and the mount could be securely attached to the wall. While trying to figure out the exact position for the mount, my mom double-checked my markings at the height at which we needed to drill pilot holes, but couldn't find one of the beams. I took the stud-finder, tried again, and found the beam at the place I had marked. However, when I moved the stud-finder up the wall, it stopped detecting the beam. Even stranger, when I continued to move the stud-finder further up, it detected the beam again - there was a gap! If we had put the TV up, it could easily have pulled the damaged beam loose and torn a chunk out of the wall. Because it was unsafe, we temporarily abandoned our attempt to put up the TV until we figure out both the mystery of the severed beam and a safe way to hang it.

Bonus Tech Stuff: OpenWRT - An Embedded Linux Distribution

OpenWRT is a lightweight embedded Linux distribution which is available on some routers. It uses the opkg package manager, and has 3500 available packages. The current main version is 15.05 Chaos Calmer. It is simple but highly customizable, and uses OverlayFS to overlay SquashFS and JFFS2. OpenWRT also has several available web interfaces if you do not want to use SSH. It can run on relatively minimal hardware.


Because of that, extremely cheap portable options are available from Alibaba and AliExpress. These have a reasonable amount of RAM but very little flash. Products such as this (I don't have one of these - you might consider payment protection if you want to get a bunch) could be used for simple tasks such as downloading a large file to an external hard drive. It is probably also possible to use them for a mesh network, or as a web cache backed by an external hard drive. OpenWRT routers could be fun for projects!
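
For the external-hard-drive idea, the router-side setup is mostly a matter of installing the right opkg packages. Roughly (the package names are the usual ones, but check your OpenWRT release, and the URL and device node are placeholders):

opkg update
opkg install kmod-usb-storage kmod-fs-ext4 block-mount
# mount the drive and download straight to it
mkdir -p /mnt/usb && mount /dev/sda1 /mnt/usb
wget -O /mnt/usb/big-file.iso http://example.com/big-file.iso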

#slice2016

Monday, March 21, 2016

I am Jesus (in the Passion reading)

Yesterday the youth choir started off the Palm Sunday service in the church hall by singing Siyahamba. This South African song is sung mostly in Zulu. For those like me who do not know African languages, most of the lyrics translate to "We are marching in the light of God," and the title is the "We are marching" part. Our version of the song consists of three verses - first in Zulu, then English, then back to Zulu. However, even if we had not sung the middle verse, most people would be able to tell from the melody that it is an uplifting song. When it was over, we processed out with the adult choir and the entire congregation, holding palms, into the church.

Once in the church, I had a few minutes before making my way up to the altar for the Passion reading. I was selected to read the part of Jesus because I apparently bear a strong resemblance to the European representation - at least that is what people jokingly said. Surprisingly, I was more nervous about falling than about the large part I needed to read in front of the full church. The podium I was standing on was meant for one person, but I shared it with the narrator. Each time one of us spoke, the other needed to step to the back of the small platform, because there wasn't enough room by the microphone. So I was more focused on avoiding falling backward down the steps than on the audience looking up at me. From this experience, I learned that fear of falling off small platforms is an effective cure for stage fright. Luckily the Passion reading went well and neither of us fell off. However, even if I had fallen, my body splayed out on the wooden floor, it would have been a good preview for next Friday.

Bonus Tech Stuff: Transfer.sh: Simple Command-Line File Sharing

Transfer.sh is a website that allows temporary online storage of files for transfer. Files can be uploaded with wget or cURL and downloaded over HTTPS, and there is also a way to view them online. Transfer.sh can store files of up to 10 GB for 14 days. Best of all: it is free and open source.

Here is a demo upload: https://transfer.sh/dLpFr/hello.txt
That link will eventually break.
Here is a command to test it by uploading text and re-downloading it to stdout:
curl $(echo hello | curl --upload-file - https://transfer.sh/hellotest.txt)

This is relatively simple, and could be used with GPG to securely transfer files. You could also use tar and LRZIP if you wanted to bulk transfer a large number of files. They also have a virus scanner, if you have to deal with Windows things. Transfer.sh has a bunch of examples using tar and GPG. They also have examples for backing up an SQL server and sending email, although I am not quite sure how secure this is. However, this will be fun to play around with!
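
For example, a bulk encrypted transfer might look roughly like this (the directory name is a placeholder, and transfer.sh prints the real download URL when the upload finishes - the URL below is not a real one):

# pack, encrypt with a passphrase, and upload in one pipeline
tar czf - my-project/ | gpg -c -o - | curl --upload-file - https://transfer.sh/my-project.tar.gz.gpg
# on the other end: download, decrypt, and unpack
curl https://transfer.sh/SOMETOKEN/my-project.tar.gz.gpg | gpg -d | tar xzf -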
#slice2016

Sunday, March 20, 2016

Robotics & Urinetown

At robotics this weekend we managed to test the RS-485 devices that we are using for data transfer between the top-side and bottom-side electronics. We were also able to upload the rest of the code to GitHub. During the previous meeting, we attempted to send data between the RS-485s using a test program, but instead of the characters we typed in, we received a stream of weird Y-like symbols. During yesterday's meeting, I tried debugging this. First, I checked our configuration against a wiring diagram I found to ensure that we had wired it properly. Much to my surprise, the diagram showed that we needed to add some resistors to our circuit. After doing so, our system still did not function properly. To test the electrical flow, we attached an oscilloscope (which detects electrical wave signals) and an LED (a light that would light up if electricity was flowing). These showed that there was no electrical signal passing through our wires. After checking everything, I eventually realized that my communication pins were swapped. Once they were moved to their appropriate placement, the RS-485s finally started working. Robotics, like the rest of life, is often a matter of trying different strategies to solve a problem, and once it is solved, moving on to the next one.


"Every once in a while we need to be told that our way of life is unsustainable," was the moral of HTHS PAC performance of Urinetown which took place this weekend. I was able to go to the play with a friend who is an HTHS grad.  Despite microphone problems, we could hear everybody's awesome singing. The set, props, acting . . . all of the show was amazing. Strangely enough, the blips in the show went along with the show's message - that the show's "way of life" was unsustainable. However, unlike the characters in the play, the show ultimately did survive.

Bonus Tech Stuff: Google Cloud Platform VMs

Google Cloud Platform is an easy-to-use cloud hosting system run by Google. This post is in no way an attempt to compare it to AWS - I do not have personal experience with AWS. GCP provides many options which make it efficient for variable-load processing: by-the-minute pricing, preemptible VMs, and many other ways to keep costs down. GCP is also highly customizable.

The VMs themselves are termed "Compute Engine," and Google offers many options to customize them. There are three main VM configuration types: Standard, High Memory, and High CPU. The Standard configurations have 3.75 GB of RAM per CPU core and support 1-32 cores. The High Memory configurations have 6.5 GB/CPU and the High CPU configurations have 0.9 GB/CPU; both can have 2-32 cores. All three of these require that the CPU count be a power of 2. If less than a full core's worth of processing power is needed, you can use the Shared Core configurations, which let you share a CPU core with other VMs. This is inexpensive and is available with either 0.6 or 1.7 GB of RAM. On the 1.7 GB model you are guaranteed half a core, whereas on the 0.6 GB model you get whatever is available. If the requirements of a service lie uncomfortably in between these, you may also use a custom VM. Custom VMs have a few simple rules:

  1. When using more than one core, the number of CPUs must be even
  2. You may not have more than 32 CPUs, or 16 for some locations
  3. There must be between 0.9 and 6.5 GB/core
  4. RAM is expressed in increments of 256MB
Also, all VM types except for Shared Core can be equipped with up to eight 375 GB local solid state drives.
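
Creating either kind of VM from the command line is straightforward with the gcloud tool. A sketch (the instance names, zone, and sizes are just examples, and assume the Cloud SDK is installed and authenticated):

# a predefined 2-core standard VM
gcloud compute instances create test-vm --zone us-east1-b --machine-type n1-standard-2
# a custom VM with 4 cores and 8 GB of RAM (satisfies the rules above)
gcloud compute instances create custom-vm --zone us-east1-b --custom-cpu 4 --custom-memory 8GB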

#slice2016

Friday, March 18, 2016

The Flying ITR

Today my group went on the Monmouth Junior Science Symposium (MJSS) trip. Of the many interesting presentations, I was especially intrigued by the one on automatic identification of brain cancers. Before this presentation, I had never heard of using hyper-planes for classification. Now, as I am thinking back, I wish I would have asked the presenter why he selected that type of classification algorithm. Why did he choose to test classification by hyper-plane - why not use a decision tree or recursive neural network? I believe he had a number of options and it would be interesting to hear about his decision process for selecting that classification method.

In addition to the scientific presentations, there were other interesting parts of the day. During lunch we played Brain it On! - a challenging game where you draw shapes to solve physics puzzles. We were very focused on quickly drawing our shapes before time ran out, when suddenly someone's ITR sheet flew off a table and was blown skyward. It drifted from side to side in the air, seeming as if it was going to land on the roof. As it swooped down, brushing the concrete, I quickly pinned it to the ground with my phone. Its owner was grateful for my save, as it would have been bad if it shot back up and was lost on the Monmouth University campus forever.


Bonus Tech Stuff: GCC

GCC is a robust compiler used for most Linux code not written in an interpreted language. It has support for a wide variety of languages and CPU architectures, some of which are not mainline. Many may know that it supports C and C++, but it also has lesser-known support for D, Java, Fortran, Go, Ada, and VHDL (some of these through separately maintained front ends). GCC is also compatible with a wide variety of CPU architectures - from the boring old x86 to the 112-data-operations-per-clock-per-core FR1000 (FR-V).
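
That architecture support is what makes it handy for SBC work - the same source can be built natively or cross-compiled. For example (the cross toolchain name assumes something like Debian's gcc-arm-linux-gnueabihf package is installed):

# native build, with optimization and warnings
gcc -O2 -Wall -o hello hello.c
# cross-compile the same file for a 32-bit ARM board such as an Orange Pi
arm-linux-gnueabihf-gcc -O2 -Wall -o hello-arm hello.c
file hello-arm   # should report an ARM executable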
#slice2016

Thursday, March 17, 2016

Dr. Agrawal's Class - With No Work to Do

I am in Dr. Agrawal's class and half of my classmates are gone for MJSS. Dr. Agrawal started the class by telling us that there was no work to do. Was he joking around with us? No work to do in math - I could hardly believe it - there are always plenty of problems to practice on . . . But Dr. Agrawal was serious and asked us what we would like to do. Somebody jokingly suggested that we go outside and, since it was a sunny day, Dr. Agrawal thought that was a good idea.

While he was walking over to the phone to ask Mr. Bals if that was allowed, Michael noticed a cart of Macs. So, just as Dr. Agrawal was about to make the call, Michael asked if we could use them. Dr. Agrawal switched focus with us - forgetting entirely about going outside and agreeing to let us use the computers. So, today in math class, not only did we not do math and almost get to go outside, but we were able to do whatever we wanted (within the parameters of the school rules) on the computers! This is probably one of Dr. Agrawal's most unusual classes so far. Even more unusual, I am writing this blog post for English class in math class right now!


Bonus Tech Stuff: OpenRISC

OpenRISC is an open source CPU architecture. OpenRISC can be 32- or 64-bit, and may have 16 or 32 general-purpose registers. An OpenRISC-based CubeSat called TechEdSat was launched into space in 2012. Samsung has incorporated OpenRISC processors in their DTV SoCs. OpenRISC also has a wide variety of available SIMD instructions.

There are two main implementations of OpenRISC: OpenRISC 1200 and mor1kx. mor1kx is highly customizable, but is only single-issue. OpenRISC 1200 is a bit faster, and can reach 1.34 CoreMarks/MHz. However, this is much lower than a typical ARM or Intel core, so OpenRISC, at least with current implementations, cannot be considered a legitimate alternative to modern application processor architectures. It can, however, be a nice example to check out if you want to learn about processor pipelines.
#slice2016

Tuesday, March 15, 2016

Robotics and the Picky Camera

My mission at the robotics meeting was to test the last of the cameras for our underwater bot. On Saturday I was only able to test three of the four cameras since we could not find the fourth. Luckily it turned out to have been at school, rather than permanently missing. The director of the club had it ready for me to check at the beginning of the meeting, so I was able to get started on the complex set-up process.

I took my little laptop, hooked it up to our control box that contains the only power supply that will work with the camera, and also connected the USB video adapter.

The video adapter is what some might call "picky" if it were human. It has very unusual drivers, so it will not work with most graphically launched video display programs. So far, we have only been able to identify two programs that work with the adapter: avconv and mplayer.

Following the physical setup, I punched in the mplayer command to have the camera's image display on my screen and    .   .    .

                  "Command not found."

So, I tried avconv and    .   .    .

                   "Command not found."

Rats!!!  I didn't have either of them installed on the laptop I had with me.

I tried to install mplayer by manually downloading the .deb package, but of course forgot to download one of the recommended dependencies.

Eventually I gave up on my laptop and borrowed an old TV from the storage closet. We plugged the camera directly into it, turned it on, and waited for the camera's image to appear     .   .    .

Although we were now able to view the camera's impressionistic colorful swirling image of the room, it was not quite the image that would help us win the MATE competition. The camera, after all of that work, was broken.  Fortunately, we have enough funds in the budget to replace it and try again. By then I will have both of the adapter programs downloaded on both of my laptops.
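
For reference, the kind of invocation I was trying to run looks something like this (the device node and resolution here are assumptions for a typical V4L2 capture adapter - our picky one may need different options):

mplayer tv:// -tv driver=v4l2:device=/dev/video0:width=640:height=480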


Bonus Tech Stuff: Squid - Speeding up Internet Access

Squid is a proxy server designed to speed up internet access by caching files. It is highly customizable and can work alongside other services to provide extras such as antivirus scanning, Tor, and ad blocking. This type of service can be especially helpful for environments with a large number of computers on one network - such as a school. Commonly accessed pages, such as the Google main page, are stored so that they only need to be re-downloaded every once in a while. This strategy provides a significant speedup, and allows a weak internet connection to be usable by a large group of people.

This proxy server can also theoretically be set up on Android phones with a tool like Linux Deploy. The cache created by Squid could reduce the phone's need to use mobile data; how much depends on how you use your phone. Squid is also able to forward downloads through Tor, and can even be used as an ad blocker. For devices with more limited processor and memory resources, Polipo may be used instead - it is like Squid, but designed for resource-constrained systems.
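
Getting a basic caching proxy going on a Debian-style machine only takes a few steps (a sketch - the package name, paths, and subnet are assumptions for your particular setup):

aptitude install squid3
# in /etc/squid3/squid.conf, allow your LAN *above* the final "http_access deny all":
#   acl localnet src 192.168.1.0/24
#   http_access allow localnet
#   cache_dir ufs /var/spool/squid3 2000 16 256
squid3 -k parse           # check the edited config for errors
service squid3 restart
# point a client at port 3128 (Squid's default) and test
curl -x http://proxy-host:3128 http://www.debian.org/ -o /dev/null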

Arch Wiki:
Squid
Polipo
#slice2016

Monday, March 14, 2016

Switching Pandora Accounts to Avoid Horrible Music Choices

Over time, Pandora seemed to be gradually playing songs less and less related to the station I had selected. My station on the family account appeared to be picking random songs for some unknown reason - it seemed almost as if the station was corrupted. Luckily, I had set up my own Pandora account a long time ago and had barely used it. So I closed proprietary Chrome and loaded up Chromium with Pepper Flash, because I figured that changing as much as I could might help get me back to the station as I liked it.

I went to the Pandora website and logged into my old account. It took me two tries to figure out what my old password was, because it had been a while since I last used it. I selected "The Beatles" station and (drumroll) finally got Pandora to pick good songs! Maybe Pandora, in an effort to prevent users from becoming bored with the songs rotating through the station, has decided to add songs to the station's playlist as time goes on. In general, this could be a good idea if the songs are similar to the station's theme.  However, when they have gone as far away from the theme as they did in my case, it does not make the user want to continue listening.

Unfortunately my Pandora ad blocker stopped working, but I can tolerate the ads for now. Plus, I quickly realized that it is now easier to get to, since I didn't have to open both versions of Chrome.

Bonus Tech Stuff: Seamless VM Integration

In some cases, Windows-only programs like Adobe Photoshop can prevent users from switching to Linux. Some programs will not even run on WINE. For problems like this, you have to use Windows. But that doesn't mean you can't use Linux.

These programs can be run inside seamlessly integrated virtual machines. This can be accomplished with minimal overhead using tools like Xen which allow for graphics virtualization. Versions of rdesktop newer than 2013 have support for launching individual applications. Using these tools, you can set up application launchers in the standard launch menu, making the Windows programs blend in with the rest of the Linux programs. Seamless integration could essentially eliminate Windows-only programs as a blocking factor for the switch to Linux, allowing the user to transition away from Windows gradually rather than abruptly. There are limitations, though - it is questionable how well RDP can carry an intensive 3D game. However, games can be streamed through a system built into Steam.
#slice2016

Sunday, March 13, 2016

I'm Hungry!

My homework is mostly done, and I am writing this blog post at Kicky's Restaurant. I ordered their delicious Pad Thai, while my parents ordered sushi. They, and a lot of people, really enjoy sushi, but I have no desire to try it. Raw fish and seaweed do not make me want to take a bite, despite how interesting it may look. Generally, I am more of a noodle person. I love Pad Thai, Italian pasta in almost any form, and noodle kugel - the sweet kind.

My mom's chili is my favorite food - I would choose it over anything else. She makes it with ground beef, beans, tomatoes, and spices. Her secret ingredient is a square of unsweetened chocolate. Yes, you did read that correctly - she adds chocolate to the chili and it tastes delicious. Well, you cannot actually taste the chocolate, but it does make the flavor richer. Mole sauce, which is used in some Mexican main dishes, also contains chocolate as a spice. I can't wait until the next time I come home to the delicious smell of chili coming from the crock pot in the kitchen. Just in case you had not noticed, I am REALLY hungry! I hope my food comes soo . . . it's here! Time to eat!

Bonus Tech Stuff: Virtualization

There are many high-quality virtualization systems to choose from that are compatible with Linux. If you would like to run another operating system, or another instance of the same operating system, there are many options with different drawbacks, advantages, and structures. I will mainly talk about Xen and KVM, although I will also touch upon some others.

Xen has a very unusual structure compared to traditional virtual machine software. Rather than running as a piece of software inside an operating system, Xen runs the main operating system as a VM. The main advantage of Xen on a desktop is that it allows a graphics card to be split efficiently between VMs. If you are a gamer, you can use Xen to get high frame rates in games running on both Linux and Windows. Overall, Xen is especially efficient at I/O management.

KVM is a hypervisor built into the Linux kernel. It runs on a variety of devices, and has a more traditional structure. KVM runs the VMs inside the main operating system. One key advantage of KVM for servers is that it provides live guest migration - you can move VMs to other servers before taking a server down for maintenance. This will prevent the need for any VM downtime.

Both of these commonly used virtualization systems are fast and open source. There are also some lesser-used alternatives such as VMware - a closed-source, paid virtualization product. Another type of virtualization technology available for Linux is containers. Containers simply allow the Linux kernel to have more than one userspace, providing OS-level virtualization. Many open-source containerization systems are available, including LXC, chroot, and higher-level tools like Docker. Like KVM, some of these tools also support guest migration.
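
As a quick taste of KVM, you can boot an installer ISO into a throwaway VM with nothing but QEMU (a sketch - the ISO name, disk size, and memory are placeholders):

# create a 10 GB disk image and boot the installer with 2 GB of RAM and 2 cores
qemu-img create -f qcow2 test-disk.qcow2 10G
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
    -hda test-disk.qcow2 -cdrom debian-8-netinst.iso \
    -net nic -net user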
#slice2016

Saturday, March 12, 2016

------- Crashed my Game!

Yesterday I was playing Minecraft with one of my friends, and he kept getting disconnected. I have a server running on my dysfunctional desktop but, since it was only at a few percent CPU usage, that could not have been the cause. Further investigation found that none of our electronics were responsible for our inability to play; it was our internet provider interfering with our game. Their high error rate was clogging the connection. This made me wonder why advertisements do not compare error rates along with their comparisons of speed. We have a plan with, supposedly, 150 Mbps of download speed and 12 Mbps of upload speed. However, those speeds seem only to show up in network testing software - and even then, I have run speed tests many times and have yet to see them reach the advertised level.

This is especially frustrating when the high error rates interfere with using my server to play online games - to the point where TCP (the connection to another computer on the internet) cannot deal with the congestion. I wonder whether they purposely inflate error rates so that the bandwidth is distributed across households, thereby decreasing the internet provider's expense. Yesterday the error rate was so high that my friend's computer couldn't maintain a basic TCP connection to my server. Giving up on any hope of playing Minecraft, we switched to Robocraft, which has a much higher error tolerance. That still was not enough - the latency kept growing every battle, to the point where the connection failed before the end of each round. I decided that I will try to create a mechanism to get around this error-rate-boosting system. I also think it would be interesting to have HiTech students collect error rate data at each of their homes for their respective internet providers. Having a prestigious technology high school present a research paper on this would call attention to the problem. If the paper were picked up by the local news, as many of the achievements coming out of HiTech are, it might force the internet companies to make changes.
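
Collecting that data would not even be hard - packet loss shows up directly in ping or mtr (the target address here is just an example):

# 100 pings, then look at the "packet loss" line of the summary
ping -c 100 8.8.8.8
# or a per-hop report that shows where along the path the loss happens
mtr --report --report-cycles 100 8.8.8.8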

Bonus Tech Stuff: Using the dd Command

The dd command, also known as the "copy and convert" command, is like a disk-transfer Swiss-army knife. In its simplest form, it allows you to write over a file with data from another file. But what makes this command so useful is that it works even when the source or destination is some form of drive - USB stick, hard drive, CD, tape drive, floppy, etc. For example, you could copy your home partition onto a flash drive with dd if=/dev/sdXN of=/dev/sdXN, replacing each X with the correct letter and N with the appropriate partition number. dd can also take other types of files as input and convert them appropriately - hence the name copy and convert. (You may be asking why it is called dd instead of cc - it was changed to dd because cc was already taken by the C compiler.) For example, you can take a .iso file from a distro website and burn it onto a drive using dd.
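
Writing a downloaded installer image to a USB stick looks roughly like this (the ISO name is whatever you downloaded, and /dev/sdX is a placeholder - double-check the device letter, because dd will happily overwrite the wrong disk):

# the target is the whole stick, not a partition like /dev/sdX1
dd if=debian-netinst.iso of=/dev/sdX bs=4M && sync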

dd will also read from stdin when no input file is specified, and write to stdout when no output file is specified. This means that you can pipe it through other commands, as well as transfer a partition from one system to another. In this example, I will assume that we are copying our entire main drive from one computer to an identical computer because we do not feel like using the installer, and that we are using an external drive as the go-between without putting a filesystem on it (for a speed boost). We could use dd if=/dev/sda | gzip -f | dd of=/dev/sdb, and on the other side, dd if=/dev/sdb | gzip -d | dd of=/dev/sda. Or, we could use cryptcat to transfer it over the network.
#slice2016

Thursday, March 10, 2016

Work on my Research Project: Buildroot Linux

Orange Pi Mini 2

Today I decided that I was going to work on something for my sophomore research project. For those who do not know, my research project uses a large but inexpensive cluster of Orange Pi PC single-board computers (SBCs) to perform tasks typically handled by $5,000+ high-performance servers. One of the things I need to accomplish is creating a lightweight operating system (OS) to run my tests on. I would prefer not to use the OS images provided for Orange Pi, since they contain a major bug which needs to be corrected to get them to function properly. I have some constraints for my selection of a platform - it must be minimal, lightweight, fast, customizable, and easily deployable.

On v86 I tried a system called Buildroot, and found it to run efficiently there. I thought that if Buildroot could run fast on v86, it should be able to run even faster on a real system. So I pulled up the website and read about it. Buildroot is a relatively simple and straightforward tool for creating special-purpose Linux systems. It lets the user select the software to include, as well as the compile configuration, so that they get exactly what they want with no extra packages wasting valuable CPU and memory. The user can also specify exactly the drivers needed for the target hardware. Buildroot can output the completed system as an image file which can be dumped directly onto the target device. This seemed promising, so I cloned the git repository and began to experiment with it.
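
The workflow itself is pleasantly short. Roughly (whether your board has a ready-made defconfig in the tree is something you have to check):

git clone git://git.buildroot.net/buildroot && cd buildroot
make list-defconfigs          # see which boards have ready-made configurations
make raspberrypi2_defconfig   # or your board's defconfig, if one exists
make menuconfig               # pick packages, init system, filesystem type, etc.
make                          # build everything; the result lands in output/images/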

As I began to work with it, I noticed a few unusual aspects of the system. During configuration, it displays the root password in plain text, potentially compromising the security of the system. In addition, the default password hashing scheme is MD5, which is not the best choice for an embedded system (or any other system, for that matter). Luckily, the user can select stronger schemes, such as SHA-256 or SHA-512. Also, it does not have nearly as many available software packages as other systems. Despite these quirks, Buildroot looks as if it may be a good fit for my project. For now, I need to get back to my homework and continue exploring later.
#slice2016

Wednesday, March 9, 2016

Getting my Gaming Desktop to Work: New AMDGPU Driver



Shockingly, there was not as much homework tonight - I had a bonus half hour! To take a break between schoolwork and homework, I decided to research a way to get my dead desktop working. I do not usually have time to do things other than schoolwork, but when I do, I like to work on my computers - writing code and fixing them. One of the main problems I have with my desktop is the graphics drivers, which I have been battling ever since I finished building it. Surprisingly, I have managed to make them function in the past. They just break shortly afterwards, usually because I create an unsolvable dependency conflict which inspires the computer to try to uninstall everything. Then I have to reinstall the OS . . . and the drivers are broken again. Each time this happens, the old fix does not work anymore. VERY frustrating!

I decided that my focus today was to figure out a way to run games on my gaming desktop again. In its current state, my desktop does not have graphics at all - just plain old text terminals. If I want to play 3D games, it at least needs to be able to display them on the screen. In order to do this, I quickly realized that I was going to need to set up graphics acceleration - which meant getting one of the two drivers, which usually don't work, to work. That was until . . .  I discovered there was a third driver.

Many people with AMD graphics on their Linux system may have painful memories of trying to get graphics acceleration to work, myself included. This has been a significant hurdle for users who want to use an AMD card. However, this may all be about to change. AMD recently released a new driver, rather plainly and simply termed AMDGPU. Their new driver is taking AMD graphics support in a different direction. Rather than using all open source or all closed source, this driver is based on the open source driver, but also contains closed source components that run features which do not work in the open source driver. This is exciting, because people who tested the new driver have had spectacular results. Also, AMD is hoping to open source as much of their code as possible.

Hopefully this means that there will be stable drivers for AMD systems. A test by Phoronix (a system testing website) shows that AMDGPU has the capability to be even faster than the existing open-source driver stack built on DRM (Direct Rendering Manager - not Digital Restrictions Management). AMDGPU is not yet as reliable as Catalyst, but it may eventually become more reliable than the current drivers. The new driver is available in Debian Sid and Stretch, and can be installed as "xserver-xorg-video-amdgpu" (aptitude install xserver-xorg-video-amdgpu). It comes with Vulkan support. The driver is also available in the repos for Arch, Gentoo, Fedora, and openSUSE.
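
Once it is installed, it is easy to confirm which driver is actually in use (a couple of generic checks - nothing here is specific to my card, and glxinfo comes from the mesa-utils package):

# which kernel driver is bound to the graphics card
lspci -k | grep -A3 VGA
# whether the amdgpu kernel module loaded cleanly
dmesg | grep -i amdgpu
# which driver is doing the 3D rendering
glxinfo | grep "OpenGL renderer"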

I will try it out soon and report back.
Good luck AMD users!
#slice2016

Monday, March 7, 2016

Feisty Cat

Feisty with the smaller of the two blue bath mats.
My cat, Feisty, kept sitting on my Spanish textbook while I was trying to study. I tried to move him, but he whacked my hand away indicating that he had no intention of moving. This is the same cat that dragged a giant solid rubber-bottom bathroom mat up a flight of stairs, down a hallway, and into another bathroom while still small enough to hold with one hand. That mat was at least twice his weight, but somehow he was able to accomplish this amazing feat many times. Now he is big enough to cover my Spanish textbook entirely, and he is not going to move if he doesn't want to - you cannot pick him up while he uses his disproportionate strength to push you away. In order to complete my Spanish studying, I needed to formulate a plan to get Feisty off the textbook.

While I was thinking, he fell asleep, covering parts of papers below the book. Eventually I remembered how he would sometimes wake up, sprint down a hallway, down the stairs, and stop right in front of his bowl seconds after hearing the sound of food being poured. So, I picked up the bowl, filled it with cat food and . . . he was right there, eating out of the bowl even before I put it on the ground. I finally was able to see the pages of my book and to return to studying. Luckily, after he finished eating, Feisty came back and sat next to the textbook where he stared at me and fell asleep. This was a good compromise because I could now study and take breaks to pet him!


Bonus Tech Stuff: Debian

Debian Linux is arguably one of the most important Linux distributions. Although many will say that it is not the best distribution, it was still crucial to the popularity of Linux. Debian was one of the first Linux distributions, and provided an easy way to set up a GNU/Linux system early on. Before Debian, the most popular distribution was the Softlanding Linux System, but it was considered buggy by its users. For this reason, Ian Murdock created Debian - naming it after his then-girlfriend Debra and himself. In 1994, the first public release of Debian, version 0.91, was hosted at Pixar. The project rapidly picked up steam, but it still ran into some problems, such as when, in November 2002, the building that housed the server burned down.

Debian names all of its releases after Toy Story characters. This started in 1996, when Debian 1.1 was named Buzz. Minor releases no longer get their own names, and the current version (Debian 8) is named Jessie. Major Debian releases are scheduled roughly once every two years, and allow the user to upgrade without reinstalling (or nowadays, even needing to reboot). Debian uses the APT package management system - with front ends such as apt-get and aptitude - to install software. The package archive has grown rapidly: in 2000 there were about 4,000 packages, and now there are well over 43,000.

Debian also has many advantages. For example, security problems are reported openly on the front page of their website, and many security bugs are fixed within 24 hours of getting reported. Debian is also rather stable - it has very few noticeable bugs, especially in the oldstable releases. It also has releases which allow you to test out newer software before it gets incorporated into the main release. Another advantage of Debian is that it supports a variety of hardware and has incredibly small memory usage. In its default configuration, it can run in 128 MB with a graphical interface, or 64 MB without graphics. Some extra software can be removed/replaced in order to make it fit in even less powerful systems. It can run on most hardware supported by the Linux kernel.

If you are interested, you can download it here (network installer) or here (live installer). If you do not understand what I mean, I would suggest you use UNetbootin to install it instead.

#slice2016

Sunday, March 6, 2016

Choir and How a Song is More Than Just its Words

As usual, I arrived at choir practice before everyone else. I arrive at exactly the minute that choir is scheduled to start, while everyone else is often finishing up their food at coffee hour. Just to clarify - this is not because I do not take advantage of the opportunity to eat a delicious fresh-baked chocolate doughnut, but because it is so good that I eat it quickly. I then have time to answer older congregants' questions about technology before I head off to practice. When everyone else arrives, we begin our warm-ups. We start by humming the lowest notes, progress to singing "ooo" when we get higher, and then repeat, changing the vowels a few times. Next, we go through the songs we will be singing for Palm Sunday and Easter.

When we sing, I always pay more attention to the flow of the songs than to the words - I believe that the flow often carries more meaning than the words alone. While singing one of the songs, I realized that the word "destroy" can sound calm if you shorten the consonants and lengthen the vowels, stretching the "o" into a wavy tune. A smoothly flowing song typically makes the listener feel happy or calm, while a fast song can excite, energize, or anger the listener, depending on word pronunciation and rhythm. When you pay attention to rhythm and pronunciation, you notice an entire level of meaning in a song that goes beyond its lyrics. Sometimes the words and the music convey a similar meaning, but other times they do not. This does not only apply to songs - the way you say anything carries meaning. Yelling "goodbye" in a gruff tone means something very different from a soft "goodbye." Singing has taught me that the way you say something matters just as much as your choice of words.

Bonus Tech Stuff: SBCs - Single Board Computers

Single-board computers provide reasonably high performance in a tiny form factor. These devices are typically inexpensive, credit-card sized, and nowadays fast enough to run web browsers like Chromium (web browsers use a lot more processing power than one would think). In the past, most of these devices were slow and expensive due to limited popularity, but now that they have caught on, that has all changed. Most single-board computers use ARM smartphone processors, which also draw very little power.

My personal favorite SBC so far is the Orange Pi PC. Although it has limited documentation and a small glitch in the installation process, the OPI-PC is faster than all other SBCs in the under-$40 price range, and it provides what might be the highest performance per dollar of any SBC. It costs $15, and it has 4 ARM Cortex-A7 processor cores, a Mali-400 GPU, and 1GB of RAM. Although the ARM Cortex-A7 was released in 2011-2012, it remains the most power-efficient ARM core currently available. It will be beaten later this year by the upcoming Cortex-A32 and Cortex-A35 cores.

Much more popular than the Orange Pi PC is the Raspberry Pi. While none of the Raspberry Pis are as fast as their Orange Pi counterparts, Raspberry Pi has great documentation and an assortment of high-quality tutorials. ODROID is another line of SBCs with a variety of available configurations. The ODROIDs are extremely fast, but they are also more expensive - the high-end ODROID-XU4 costs $74. In exchange, the XU4 features 4 fast cores and 4 low-power cores (8 cores total), 2GB of RAM (enough to run a reasonable number of Chrome tabs), USB 3.0, an eMMC port (the type of flash memory used in phones), and a cooling fan.

#slice2016

Saturday, March 5, 2016

Robotics Club

Today in robotics I worked on writing some of the code for the underwater robot (ROV - Remotely Operated Vehicle) we are building for the MATE competition. More specifically, I was trying to set up communication between the top-side electronics, which stay above the water, and the bottom-side electronics, which ride inside the robot. We have an approximately 50-foot tether connecting a power/control box at the edge of the pool to the actual ROV. The data travels over the tether using RS-485, a half-duplex protocol - it can only send in one direction at a time. So the goal of my code was to send data in both directions over a one-way-at-a-time protocol.


Designing my code to function similarly to RDMA (Remote Direct Memory Access) - having the control box read and write the memory of the ROV Arduino - appeared to be my best option. On my first try, I created code so complicated that even I could not understand it afterwards. Since there are always bugs and modifications, no matter what you do, I knew I had to rewrite the code in a way that would actually make fixes possible. So for my second pass I kept it simple: switch directions whenever there is a read request, send the data over, and switch back. The code looks pretty good, and I think it will work. Even if it does not, at least I will be able to fix it!
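
To give a rough idea of what that "switch, send, switch back" scheme looks like, here is a minimal sketch of the ROV-side Arduino code. The pin number, baud rate, 16-register layout, one-byte command format, and the DE/RE-style direction pin on the RS-485 transceiver are all assumptions made up for illustration - this is not our actual competition code.

    // A minimal sketch of the ROV-side half of the scheme described above.
    // The control box sends one-byte commands over the shared RS-485 line;
    // this Arduino only takes control of the line long enough to answer a
    // read request, then releases it again.

    const int DIR_PIN = 2;         // drives DE/RE on the RS-485 transceiver
    const byte WRITE_FLAG = 0x80;  // high bit set = write request
    byte regs[16];                 // the "memory" the control box reads/writes

    void setup() {
      pinMode(DIR_PIN, OUTPUT);
      digitalWrite(DIR_PIN, LOW);  // LOW = listen to the control box
      Serial.begin(9600);
    }

    void loop() {
      if (Serial.available() == 0) return;

      byte cmd = Serial.read();
      byte addr = cmd & 0x0F;      // low nibble picks one of 16 registers

      if (cmd & WRITE_FLAG) {
        // Write request: wait for the data byte and store it.
        while (Serial.available() == 0) { }
        regs[addr] = Serial.read();
      } else {
        // Read request: take the line, send the value, give the line back.
        digitalWrite(DIR_PIN, HIGH);  // switch the transceiver to transmit
        Serial.write(regs[addr]);
        Serial.flush();               // wait until the byte has actually gone out
        digitalWrite(DIR_PIN, LOW);   // switch back to receive
      }
    }

The control-box side would be the mirror image: drive the line, send a request, release the line, and then wait (with a timeout) for the reply, since both ends share the same pair of wires.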

Bonus Tech Stuff: Processor Architectures

One of the most helpful features of Linux is its ability to run on many different types of hardware. It currently works on 29 distinct processor architectures, which is very impressive compared to the handful supported by Windows. This makes it easy to deploy Linux on a wide variety of devices. Some of these architectures are quirky and unusual, and many of them exist for very specific purposes. This post is an overview of some of the most interesting supported processor architectures.

[Image: one of m68k's early uses]
Did you know that some smartphones are equipped with two completely different processor types, both of which can run Linux? Qualcomm's smartphone chips include both an ARM processor (the architecture used in most smartphones, both iPhone and Android, to run all of the apps and everything else) and a Hexagon processor. The Hexagon is a VLIW DSP (Very Long Instruction Word Digital Signal Processor) - it is mainly used for audio processing. Another interesting architecture supported by Linux is TILE64 - which, despite the name, is not 64-bit. It actually has 64 tri-issue cores arranged in an 8x8 grid, yielding an impressive maximum of 192 instructions per clock cycle. Linux also supports OpenRISC, an open-source processor architecture. In addition, Itanium - an architecture developed by Intel during the switch to 64-bit - is supported. Itanium was a commercial failure because it was not backwards compatible with previous Intel architectures, and it was easier to adopt AMD's 64-bit chips, which were backwards compatible.

Here is a complete list of the architectures currently supported in Linux:

Alpha, ARC, ARM, AVR32, Blackfin, C6x, ETRAX CRIS, FR-V, H8/300, Hexagon, Itanium, M32R, m68k, META, Microblaze, MIPS, MN103, Nios II, OpenRISC, PA-RISC, PowerPC, s390, S+core, SuperH, SPARC, TILE64, Unicore32, x86, Xtensa
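
Just to illustrate how software can tell these architectures apart, here is a small sketch that reports which architecture it was compiled for, using macros that GCC and Clang predefine for each target. Only a few entries from the list above are covered, and the exact macro names can vary by toolchain, so treat this as an illustration rather than a complete reference.

    #include <iostream>

    // Returns a human-readable name for the architecture this program
    // was compiled for, based on compiler-predefined macros.
    const char* compiled_for() {
    #if defined(__x86_64__)
        return "x86 (64-bit)";
    #elif defined(__i386__)
        return "x86 (32-bit)";
    #elif defined(__aarch64__)
        return "ARM (64-bit)";
    #elif defined(__arm__)
        return "ARM (32-bit)";
    #elif defined(__mips__)
        return "MIPS";
    #elif defined(__powerpc__)
        return "PowerPC";
    #elif defined(__sparc__)
        return "SPARC";
    #elif defined(__ia64__)
        return "Itanium";
    #elif defined(__m68k__)
        return "m68k";
    #else
        return "something more exotic";
    #endif
    }

    int main() {
        std::cout << "Compiled for: " << compiled_for() << std::endl;
        return 0;
    }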


#slice2016

Wednesday, March 2, 2016

First Slice of Life Post

On the bus ride to school I investigated using an awesome in-browser x86 emulator. It was exciting to be able to run computer operating systems in the Chrome browser on my phone, although the emulator did not like my phone screen. Whenever I tried to scroll to other parts of the page, it would move my mouse in the virtual machine instead. Disabling the mouse allowed me to zoom out, but then I could not do anything else, so I enabled it again once everything fit on the screen. The other strange bug was that it would open a non-functional keyboard and zoom in every time the screen was touched. Unfortunately, I had not conquered the challenge of using the emulator on my phone by the time we reached school.

After school, when I had logged into Blogger to begin creating this blog, I found myself staring blankly into the computer screen. Despite trying to focus on coming up with something interesting from my day to write about, I found that my thoughts kept coming back to how I could make the v86 emulator faster, or how I could adapt it to work on my phone. As I sat there with the parallel processors in my brain working on each of these tasks, I realized that many of the things that excite me during the school day are related to the new technology and science I have discovered, explored, or discussed with others. So, I have decided to blog about both the events and the technology that I think about during my day. I hope you will enjoy both components of my blog posts.



Bonus Tech Stuff: v86 - Virtual Machine in a Browser



Recently, I found an extremely interesting project that runs x86 emulation in a browser. In other words, it lets you run virtual computers in a browser window. I was immediately interested in the project and began messing with it. It is relatively new and still has plenty of quirks, but I think it is very useful.

Advantages:
This x86 VM system of course has the main advantage of running in a web browser. It has support for emulated hard drives, floppy drives, and CDs. v86 has many example OS images set up, including Windows 98, Arch Linux, Kolibri OS, Buildroot Linux 2.6, Buildroot Linux 3.18, Windows 1.01, FreeDOS, OpenBSD, Solar OS, and Bootchess (a chess game). The Buildroot Linux 3.18 image even comes with a Lua interpreter. The system supports all of the most common x86 instructions - roughly Pentium 1-level instruction support overall.

Disadvantages:
The main disadvantage is that, at least in my experience, the system caps out at about 30 MIPS (million instructions per second). This is incredibly slow, considering that my computer can reach 21.6 billion instructions per second on one core, or 43.2 billion on both cores (most Intel systems are in the same ballpark) - roughly 700 times faster than the emulator on a single core. Most non-browser virtual machines stay extremely close to the speed of the actual machine, so this is quite slow in comparison. The networking does not appear to be functional, and some of the images won't load on certain networks. The graphics do not appear to be accelerated - meaning that everything is drawn on the CPU instead of the GPU. It also does not yet support SIMD (single instruction, multiple data - a technique for speeding up the processing of large arrays). Finally, the mouse does not line up, and the interface is totally non-functional on mobile.

Change:
However, do not think that all of these problems are permanent. There have been 35 changes since the beginning of February, and that rate will probably go up if enough people become interested. Five issues have been opened in the last 7 days, which is good, because it means more changes are on the way. Most importantly, this is an open source project, so you can help improve it on GitHub.

#slice2016