NAOMI Updated Pics

Here is the latest picture I have of the ‘Semi-Portable’ version of NAOMI.

This includes the following:

  • 30Ah Battery @ 2.2A (tested)
    • 35Ah Li-ion Cells “Quick Drain”
  • 7″ Touch Screen
    • Mounted externally in its own case
      • Not permanent; built for the bench
  • Raspberry Pi 2
  • 1TB USB 3.0 Hard Drive
  • Bluetooth Dongle
    • Not shown
  • Powered USB Hub
    • Replaced/rebuilt with one that has switched ports
  • Logitech C170 USB Camera/Mic

Projects Page Added!

As requested, I have created a place for my projects on my site. Please feel free to follow along as I work through them. These are not projects I am taking on professionally; they are purely for fun, with no expectations or time limits. That being said, I am a huge supporter of open source.

So in that spirit, I will be posting all functional code when I feel it’s ready to be used. I am always open to ideas and suggestions; feel free to contact me via my Contacts page any time.

Thanks,
Matthew D. Curry


Quick Tip of the Day.

Not that I have them daily, but I might if I get a good response.

Have you ever tried logging into an SSH server and gotten a weird error like this?

/.ssh/config: line 22: Bad configuration option: \342\200\202

This is a very simple issue, but it can be a huge PITA if you can’t fix it quickly, especially for those of us who juggle an enormous number of keys every day. My SSH config is fairly simple, but I still ran into this issue after pasting a block of text into ~/.ssh/config. I opened the file with vi, and then nano, and I was only able to get it to work by removing the spaces before each line it complained about, then putting them back normally and saving.

It turns out that text copied from another source can contain whitespace that isn’t interpreted properly: the characters look like spaces but are actually different Unicode characters. Once they are manually removed and replaced with normal spaces, the config works properly. I hope this saves some people some time.
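If you would rather not hunt the bad characters down by eye, a quick grep can flag them and sed can swap them out. This is a sketch that works on a throwaway copy so nothing real is touched; point it at ~/.ssh/config for actual use (GNU grep with -P and GNU sed are assumed):

```shell
# Work on a fabricated sample file; use "$HOME/.ssh/config" for real.
cfg=$(mktemp)
printf 'Host example\n\xe2\x80\x82IdentityFile ~/.ssh/id_rsa\n' > "$cfg"

# Flag any lines containing non-ASCII characters, such as the
# Unicode en space (\342\200\202) from the error above:
grep -n -P '[^\x00-\x7F]' "$cfg"

# Replace that specific character with a plain ASCII space:
LC_ALL=C sed -i 's/\xe2\x80\x82/ /g' "$cfg"
```

After the sed pass the file is plain ASCII again and SSH parses it normally.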

 

Thanks,
Matthew D. Curry

 

2015-10-30

Thank you all for the support

As many of you may know, my wife and I were hospitalized a day apart.  This was completely unrelated, and unexpected.  Luckily, I had family that was able to come help us out while we were in the hospital.  That, in conjunction with the many others that have shown support, and the hospital staff that helped us while we were there made it possible to keep us healthy, and together as a family.  I can’t show enough gratitude for everyone’s help.

Also, thank you to my employer, Welltok, which has been extremely supportive throughout the ordeal. You have my deepest thanks; I am forever grateful, and forever yours as an employee.

Again, thank you to everyone, even those not mentioned (you know who you are).

 

Sincerely,

Matthew D. Curry

Husband, Father, Nerd, and Son.


Top 10 Most Powerful Computers in the World!

As some people may know, I like to ask: “Out of the top 500 supercomputers in the world, how many do you think use [your OS here]?” I use this example for a reason. Unlike conjecture, theories, and even “gut feelings,” it shows that the OS was chosen deliberately, with serious money and time poured into it. That shows it is at least a player in the field and can handle serious levels of computing. It also speaks to things like tunability, flexibility with change, and a few other qualities that only the people working on those projects can really judge. We are not concerned with those details here, just with what is on the list; and if something sticks out as odd, why.

I am using the TOP500 Project for this information, and the list is re-evaluated often. However, I will say this: there will always be a pretty clear delineation when it comes down to who the “winner” is in this space; the most efficient and successful, just as in nature.

The reason I write this article is to show a certain group of people the difference between reality and what advertisements, rumors, and self-perception have molded in their minds. I think it is a good exercise to see the effects of the budgets used to shape our perceptions. A good example is the advertising budget used by companies like Microsoft (over $2,000,000,000) and Apple (over $1,000,000 in 2013/2014). This is just money and perception, however. It will never change real-world performance numbers; math doesn’t lie. Results and time are my favorite sources of information. It’s a simple concept: there are thousands of ways to move things, yet in this age of technology the wheel is still very much in use. It’s the best solution at the time… (a small but powerful statement).

So, rather than keep you from the data pron you came to look at:

 

Top 10 Supercomputers – Nov 2015

Here is the actual breakdown by OS (as of Nov 2015), as I know that is what most people are interested in:

[Screenshot: TOP500 list statistics by operating system]

 

I know some people are wondering where Windows or iOS is. If you look at the breakdowns by OS, the machines are pretty much all Linux/BSD [Windows isn’t even a filter option on their site as of right now]. This list is really a list of Linux flavors with a few BSD machines mixed in. You will notice, as in most of the past lists, that there is not a single Microsoft product on it. This is a very simple and very important fact that way too many people miss, from the guy who fixes your desktop all the way up to the VP of Infrastructure. They are all swayed by what they own and by the amount of advertising spewed at them non-stop. Keep in mind these people are generally more connected than the rest of the populace, and thus affected even more by the ads. Combine that with salespeople, calls, and pressure on support, and you get smart companies (I didn’t say large) that make really bad technology decisions. I exclude myself from that ad loop and look purely at statistical performance. This is great for business, since you can eliminate a ton of costs and licensing fees that you would incur with something not on this list.

 


Want to see the full list of 500 supercomputers for 2015?


Time Machine Backup with Ubuntu 15.x and OSX 10.7+

In older versions of this how-to, you will see people use the method shown just below. Obviously, that no longer works. I will walk you through setting up Time Machine via AFP over your LAN. I am using an Ubuntu 15.04 machine, but since all the packages are common and in the base repos, I don’t see there being a problem getting this to work on any distro.

OLD METHOD:
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
NOTE: Any OS X after 10.6 (Snow Leopard) will have to use the method demonstrated here.

 

Step 1: Install Netatalk

Install the following packages:

sudo apt-get install netatalk libc6-dev avahi-daemon libnss-mdns

Step 2: Configure /etc/nsswitch.conf

Once those packages are installed, we have to adjust four configuration files:

sudo nano /etc/nsswitch.conf

Locate the following:

hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4

Add mdns to the end of the line, as below:

hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4 mdns
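If you prefer a one-liner over hand-editing, a sed command can append the entry for you. This is a sketch, demonstrated on a temporary copy; it assumes your hosts: line matches the stock one shown above:

```shell
# Demonstrate on a temporary copy of the file:
f=$(mktemp)
printf 'hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4\n' > "$f"

# Append " mdns" to the end of the hosts: line:
sed -i '/^hosts:/ s/$/ mdns/' "$f"
cat "$f"

# For the real file (writes a .bak backup first):
#   sudo sed -i.bak '/^hosts:/ s/$/ mdns/' /etc/nsswitch.conf
```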

Step 3: /etc/avahi/services/afpd.service

sudo nano /etc/avahi/services/afpd.service

Paste the following:

<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
    <name replace-wildcards="yes">%h</name>
    <service>
        <type>_afpovertcp._tcp</type>
        <port>548</port>
    </service>
    <service>
        <type>_device-info._tcp</type>
        <port>0</port>
        <txt-record>model=TimeCapsule</txt-record>
    </service>
</service-group>

Step 4: /etc/netatalk/AppleVolumes.default

Now we setup the share:

sudo nano /etc/netatalk/AppleVolumes.default

At the bottom, locate the section that reads:

# The line below sets some DEFAULT, starting with Netatalk 2.1.
:DEFAULT: options:upriv,usedots

# By default all users have access to their home directories.
~/                      "Home Directory"

# End of File

Change the path “~/”  to your share directory.

IMPORTANT:  Don’t forget to append ‘tm’ to the options list (options:upriv,usedots,tm).
As seen below:

# The line below sets some DEFAULT, starting with Netatalk 2.1.
:DEFAULT: cnidscheme:dbd options:upriv,usedots,tm

# By default all users have access to their home directories.
/path/to/share                       "Time Capsule"

# End of File

Step 5: /etc/default/netatalk

Now, we need to adjust netatalk settings.

sudo nano /etc/default/netatalk

Locate the following section:

#### Set which legacy daemons to run.
#### If you need AppleTalk, run atalkd.
#### papd, timelord and a2boot are dependent upon atalkd.
ATALKD_RUN=no
PAPD_RUN=no
TIMELORD_RUN=no
A2BOOT_RUN=no

Update it to reflect the following:

#### Set which legacy daemons to run.
#### If you need AppleTalk, run atalkd.
#### papd, timelord and a2boot are dependent upon atalkd.
ATALKD_RUN=no
PAPD_RUN=no
CNID_METAD_RUN=yes
AFPD_RUN=yes
TIMELORD_RUN=no
A2BOOT_RUN=no

Once everything is completed and the services have been restarted on the Ubuntu server, the drive should show up under “Select Disk” in “Time Machine Preferences”.  If an old entry is present, you may have to remove it.  Once selected, you can use it as if the drive were physically plugged in.
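For reference, restarting the two relevant services on Ubuntu 15.04 looks like this (service names assumed from the packages installed in Step 1):

```shell
sudo service netatalk restart
sudo service avahi-daemon restart
```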

 

*TIP – Use a wired connection only; a gigE network helps immensely. I do not recommend WiFi.

TCPDUMP with Date for Wireshark

Just another handy snippet:

It will date the output, and also put it in a handy pcap for Wireshark.

tcpdump -i eth1 -s0 -v -w /tmp/capture_`date +%d_%m_%Y__%H_%M_%S`.pcap

*Note: This should work on all Linux distros (make sure to select the right network interface, e.g., eth1); it might have to be slightly modified for Mac. Windows can go DIAF.
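If you want to sanity-check the timestamp part of the filename, you can run the embedded date command on its own (the pcap filename below is just a hypothetical example of what the snippet produces):

```shell
# Show what the embedded date command expands to, e.g. 30_11_2015__14_05_09
# (day_month_year__hour_minute_second):
date +%d_%m_%Y__%H_%M_%S

# To skim a saved capture without opening Wireshark (hypothetical filename):
#   tcpdump -r /tmp/capture_30_11_2015__14_05_09.pcap
```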

 


Search entire server for Q4 2015 obfuscated PHP malware of unknown origin.

This is just a snippet I have used before to identify malicious code on web servers.  It will not catch everything, but it gives you a way to find suspect files.  It is easy to run from cron alongside other scripts to make a nice daily report if you have those concerns.

#!/bin/bash
# Malware Search Script
# 11/1/15 – Matthew D. Curry
# Matt@MattCurry.com

echo "Search entire server for Q4 2015 obfuscated PHP malware of unknown origin."

find / -name '*.php' -exec grep -Hn '.1.=.......0.=.......3.=.......2.=.......5.=' {} \;
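As mentioned above, this is easy to schedule with cron. A crontab entry like the following (the script path and log location are just assumptions for illustration) would run the search nightly at 2 AM and save the results:

```shell
# Run the malware search every night at 2:00 AM and log the output.
0 2 * * * /usr/local/bin/malware_search.sh > /var/log/malware_report.log 2>&1
```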

 

Hope this helps, enjoy.


Remove Spaces (or any character) from File Names in Linux

This is actually a pretty common thing to run into on a Linux file system, especially with files moved over from another operating system (usually Windows).  If you have files that need a space or another character removed, the snippet below is a very simple and handy way to fix the issue.

└─(11:26:40)-(~/Example)->ls
file 1.txt
file - 2.txt

So, from here we want to rename the file “file 1.txt” to “file_1.txt”.  This would be done as follows:

rename 's/ /_/g' file\ 1.txt

This will replace any spaces in the listed file name with underscores.  If you want to do all the files in a directory:

rename 's/ /_/g' *

Here is an example output if we run it on all the files in the directory (as seen above):

└─(11:33:59)-(~/Example)->ls
file_1.txt
file_-_2.txt

 

Note:  If you are new to Linux and haven’t heard of the sed command, the s/ /_/g expression above is the same substitution syntax sed uses.  If you get comfortable with this, you can easily learn sed, which is a great tool to have on the command line.
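As a quick taste of that syntax, here is the exact same substitution run through sed directly on a file name:

```shell
# Pipe a name through sed with the same s/ /_/g expression rename uses:
echo "file 1.txt" | sed 's/ /_/g'
# Prints: file_1.txt
```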

Want to use DNF? What to expect…

DNF actually DOES stand for something… I’m not sure where the idea started that it doesn’t.

DNF stands for “Dandified YUM”.

DNF started showing up in Fedora 18, and Fedora 20 was the first release that invited users to use DNF in place of YUM.

The technical challenge with DNF is that there is little or no support yet for the following features:

  • Debug output
  • Verbose output
  • Enabling a repository
  • Excluding packages during install
  • The --skip-broken switch has no effect
  • The resolvedep command is unavailable
  • The skip_if_unavailable option is ON by default
  • The dependency-resolution process is not visible on the command line
  • Parallel downloads (planned for a future release)
  • Undo history
  • Delta RPM
  • Bash completion
  • Auto-remove
  • many others…

 

In short, if you drink the Kool-Aid, run this in a lab only.  I know people who try to run this stuff in production; you are just asking for a serious problem.  Other than that, I hope it gets there; DNF is just too new.
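For anyone who wants to kick the tires in a lab anyway, the day-to-day commands map over from YUM almost one-to-one. This sketch assumes a Fedora 20/21 box; the package name is just an example:

```shell
# Install DNF alongside YUM (it later became the Fedora default):
sudo yum install dnf

# Basic usage mirrors YUM:
sudo dnf install htop
sudo dnf remove htop
dnf search htop
dnf repolist
```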