Wednesday, November 6, 2013

Console - Lose the Password, Reap Better Security


Often, that which seems contrary to what we want is really the best way to get it. 

No More Passwords

You may have heard talk about being more secure by not using passwords.  Specifically, if we use means other than traditional password access, then the target systems and services may be better defended.  In English: a PKI certificate, an SSH key, or some other form of credentials is typically more secure than ye olde username/password pair.

I agree with this view. 
For many of my systems, there is no password. 
I don't mean simply that the password is unknown.  I certainly don't mean that the password is blank.  What I'm saying is that there is no usable password.  Anything typed in will fail.  To access these boxes, the user (typically moi) must take one of the other routes. 
It works. 

Unusable Passwords

A quick way to render root's password unusable is to code an asterisk in the password field (either /etc/passwd then 'pwconv' or /etc/shadow directly).  This is the norm for service accounts (bin, daemon, mail, nobody, ftp).  Best practice is to employ 'sudo' for all root work.  Sign on as yourself (with SSH or whatever), then 'sudo' as needed.  There's an audit trail.  How novel. 
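
For the /etc/shadow route, here is a minimal sketch (GNU sed assumed; 'passwd -l' and 'usermod -L' are cousins, though they prepend '!' rather than coding '*'):

# replace root's password hash with an asterisk -- nothing typed will ever match
sed -i 's/^root:[^:]*:/root:*:/' /etc/shadow
# verify: field two should now be a lone '*'
grep '^root:' /etc/shadow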

So ... you don't really need a root password anyway, now do you? 

But I propose something even more radical. 

I wear the sysadmin hat daily.  Whether for development or for hobby and home, I get to play "root".  (Some of us actually enjoy this, in small doses.) 

Virtualization is virtually everywhere.  Cool! 
But one of the most annoying things about virtualization is the double-signon effect.  In my sysadmin mode, I get confronted with this regularly.  I'm signed onto the host, but the guest throws a password prompt.  [sigh]  Not surprising; makes sense; but is in the way of real work. 

It's worse than "in the way", more than a hassle.  For the guest to process your secondary sign-on, it must have ... drum roll ... a password.  Now wait just a minute.  We're trying to do away with passwords!!  Those of us trying to evolve beyond passwords find this situation positively primordial.  (a prime ordeal)  What to do?? 

No More Login

The modest proposal:  throw a shell on the guest console directly. 

Shocked?  You should be.  (Unless you've heard it before.) 

It's not a new idea.  Some of us have been recommending this for several years.  Do this in concert with unusable passwords.  Replace 'getty' with a shell.  (The security guys usually don't "get it" about the nature of 'getty' so they throw a hissy fit and we sysadmins are forced to acquiesce.)  But it's a good idea, and in the long run more secure (for virtual consoles) than a login prompt. 

With an operational shell, not a login prompt, you get immediately to work.  There is no fumbling around at the guest console.  There is no password to be concerned about.

I wish the security guys (or any objectors) would follow this line of reasoning.  Someone accessing a virtual console has already been vetted by the host.  Someone with control of a guest (even if a password prompt is in effect) can do so much more, whether good or bad.  So if they #1 have been verified and #2 are in a position to completely re-image the guest, then a no-login-required shell is perfectly reasonable.  Combine that with the unusable passwords trick and your virtual systems are hardened on one more front. 

Depending on the physical security of your data center, this crazy concept might be a good idea for physical consoles too.  The security issue is the same: you're sitting in front of a physical machine with full ability to reboot, re-image, anything.  We presume that your data center is not the public library, where password-protected consoles still make sense even though operators freely Ctrl-Alt-Del.  Results not typical.  Use with caution.  Your mileage may vary.  Do not fold, spindle or mutilate.  And I am not a lawyer.

If your shop is small, then you may *be* the security guy.  You could schedule a meeting with yourself, discuss the pros and cons, and convince yourself that this is in fact a good idea.  (But talking to yourself is a red flag for anyone in that role.) 

For Example

For most of my guest systems, I replace 'getty' with a no-login-required script of some sort.  The contents of /sbin/conshell are roughly as follows. 


#!/bin/sh
# conshell -- put an interactive shell on a console device
# optional argument: the device to use (default: the system console)
CON=console
if [ ! -z "$1" ] ; then CON="$1" ; fi
# bare device name?  prefix it with /dev/
if [ "$CON" = `basename $CON` ] ; then CON=/dev/$CON ; fi
PS1='\$ ' ; export PS1
# become the shell, with all three standard streams on the device
exec sh -i 0<$CON 1>$CON 2>$CON


Yeah yeah ... there are spiffier ways to code the conditionals.  (Where is Jon Miller when I need him?)  I've left this as-is because it works on a wide range of shells.
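
Hooking it up on a classic SysV-init guest is one line in /etc/inittab, replacing the usual getty entry.  Something like this (the id field and runlevels are examples; upstart and systemd have their own equivalents):

# respawn a shell on tty1 instead of a login prompt
1:2345:respawn:/sbin/conshell tty1

The 'respawn' action means init starts a fresh shell the moment you exit the old one.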

Thanks for reading.  Stay safe.


-- R; <><



Friday, October 18, 2013

Lessons 67 and 68 - Internet Outage


So far, I've lost 8 hours and $150.  But we have internet again.

It was Tuesday. I am normally at the office on Tuesdays. But my teammate was out and I have a full plate, so I thought I'd save the commute time and work from home. While sitting with my wife and contemplating the day before us, I noticed the lights blink. We both heard a beep.  Right away I knew two or three appliances had reset, including a server or two.  No biggie, so I thought.

When we have an outage, I try to learn from the mistakes. Whether I missed a step or some system has let me down, it's a healthy challenge to review and adjust.

Mistake Number 1 - delayed UPS battery replacement

The beep was from the UPS which covers our "important" computer gear:
the cablemodem, the router, and the main server.  The UPS's BSL1079 (lead/acid "gel cell") failed long ago, so I had been using an aging car battery in its place.  This was not simply putting things off.  The car battery, even aging, has more than ten times the capacity of the normal UPS battery.  But either the battery had aged more than I knew or my spit-n-baling-wire rigging had loosened up.  This was totally my fault.

Lesson: just order the [expletive deleted] normal battery and do high capacity as a separate project.

Mistake Number 2 - mixing services ... and service levels

Some time back, all our stuff hubbed off a server called "main".
NFS, YP/NIS, SMB, NTP, DNS, SMTP, IMAP, internal HTTP, and notably DHCP. Most of these services have been doled out to dedicated appliances or to service providers.  The exception is DHCP.  So when the primary server powered up, it had these old filesystems to check. (Things did not come down clean, so an integrity check is warranted.) DHCP had to wait until that was done.

The filesystems are still used, but with a lower service level requirement. DHCP has a much higher service level requirement, especially with increased WiFi. So the idea that a high priority service is waiting behind a lower priority service is bass-ackards (as we say in Texas).  This will change.  My fault, there's history.

Lesson: consider service requirements and plan accordingly. (DHCP will move)

Mistake Number 3 (not mine) - deceptive diagnostics (and this was the worst)

With the server back, and the network units functioning normally (I thought), I checked on our IPv6 tunnel server.  This is a Xen virtual machine hosted by the same physical box as "main".  Native IPv6 is not available yet where we are, but the SixXS tunnel does nicely.  But the tunnel wasn't starting.

Turned out that IPv4 connectivity was still down.
Turned out that the router had no DHCP lease from our ISP.
After multiple (controlled) on/off cycles of both the cablemodem and router, the relationship was still "we're not talking".  Activity lights, yes, but all zeros for the external address.  Plugged a laptop directly into the cablemodem; got a lease!  So that clearly indicated the router had failed.  Clearly.

This NetGear router has been giving us a little trouble on the WiFi side. Dunno if it is just RF interference or perhaps something we can blame on the internet provider.  There are gaps in 802.11 coverage inside the house, dead zones. So off I went to Best Buy, returning with a shiny new LinkSys "AC" model, with A/B/G/N backward compatibility.

The new router also failed to obtain a DHCP lease.  Huh?!?

Pause and reflect:  Old router, no lease.  Laptop, yes lease.
But new router, also no lease. The cablemodem is just not smart enough (one would think) to distinguish between a "computer" and a "router". I had already tried cloning the MAC address of the laptop to the old router so it would look to the provider's DHCP server "just like the laptop" (which had succeeded in DHCP).  No joy.

This is where I lost the rest of the day ... multiple reboots, power on/off cycles, WiFi reassociations, and DHCP transactions (on the "inside").  Cablemodem worked with two computers, failed on one, failed on both routers.  The new router got a DHCP lease on our internal LAN, and then so did the old router.  [sigh]   (I could have gotten a better price on the new router if I were not in a rush from the outage.)

What a waste of time.

What finally worked was to put our old "firewall" on the cablemodem. This is a Linux box with two ethernet ports.  That's one I got right. And the "mistake" was misleading cues from Time/Warner Cable's device.

Lesson: hang onto what worked before, at least one generation.  Maybe consider changing internet providers!

Mistake Number 4 - too much reliance on internet (?)

Uhh... maybe not.
We do *business* online.
For most transactions, using the Internet is no less legitimate, even more reliable, than using the telephone.  So what are you gonna tell me? Too much reliance on the phone?

I grant you that if we rely on internet for these things we need reliable alternatives, whether a procedural fall-back to voice or perhaps redundant internet service.  Yeah ... that's it.  Redundant internet service.  In a prior job, my employer paid for internet and we paid for a second line, keeping personal and "work" on separate channels.  WHEN (not if) there was a failure on one, I would switch all traffic to the other.

The idea that Joe Suburbia have two internet contracts is silly.
The idea that my family have two internet contracts is less so, but still pricey.

Lesson: be prepared, and have alternatives.

I wrote this in a hurry because I thought it should be dispatched quickly. Hopefully it's not too jumbled.  Hopefully your internet experience is more effective.

-- R; <><


Wednesday, September 25, 2013

Kissing Google Calendar Goodbye


Life is short.
With Google, life is even shorter.

It's Dead, Jim

My calendars are all dead.  This just happened.  (Not even sure "Jim" will find this article on his own, so I'll have to point him at it.)  My calendars are on Google.  I haven't been paying close enough attention.  Google had a plan.  I didn't know.  Now I know.

Comedian Pete Holmes put it well, saying (in effect) that Google is ruining our lives.


It's worse for technoids.  We don't know that Google is ruining our lives.  (Actually, we do know, but we don't care.)  To ask Google instead of having to "ask real people" is ... umm ... helpful.

I've always said: I use Google, but I don't trust them.  And when I outsourced my domain to Google's "Apps for Domains" I held back just a little.  I expected it would not last forever.  I expected Google would eventually serve Google's own needs and would no longer suit ours.  So I prepared for an exit.  I hope I prepared well enough.  Did not expect to leave quite so soon!

Geeks Go Gaga

Google gets it.  About advertising?  No no.  Consumer services?  Not that either.  Enormous volumes of data?  Not even close.  Google knows how to seduce.  Specifically, Google knows how to seduce geeks.  Seduce ... such a powerful word ... so seductive all by itself.  But seduce is exactly what "they" did.

They drew me in like they drew in thousands of other nerds.  They appealed to me with technology.  They spoke my language.  They used my protocols.

Today (the latter half of 2013), boring things like Internet standards are not as well regarded on the Internet as they once were.  Today the transition from technical playground to transport for Twitter is complete.  It doesn't matter today whether we use the "right" tool or service or protocol.  It doesn't really matter if things work!  It only matters if the users can be herded before the advertisers.  (The law of unintended consequences fits here.)

You Had Me at :-)

The seduction starts with just a word.  GTalk impressed me.  Whatever other channels GTalk may have used, it used XMPP.  (Since ours was an open relationship, I didn't care.)  XMPP is the Internet standard protocol for chat, originally known as Jabber.  Using XMPP was the right thing, better than AOL or Yahoo or MSN.  Using XMPP was moral.  I could see that Google was appealing to us geeks.  I thought their commitment to the protocol would last.  They were the guys who said "Don't be Evil".

So I started using GTalk.  Then I came to prefer my GTalk ID over other IDs available for chat/IM.  Then I learned that GMail also used Internet standard protocols, specifically IMAP, so I started using GMail over other mail services.  (You could actually get better-than-basic service from Yahoo but they charged real dollars for that.  Not much, but non-zero.)  From there the addiction grew:  Google Calendar spoke CalDAV, and Google Docs could translate into several formats.  (Not Internet standards but industry standards.  I lost focus a bit.)

The beauty here is difficult to explain to a non-tech.  When using standard protocols, I can use other tools to work with my stuff.  I can use Pidgin for IM, Thunderbird for email, and the Lightning plug-in for the calendar.  In fact, I can use any of several groupware packages, even pick-and-choose specific apps.  Subtle advantages arise, like the ability for me to move the stuff where I want it.  Normal people (that is, you non-techs) don't care about this quite as much. 

I thought that Google, being geeky, would continue to use the standards.  While I knew there would come a change, I expected it to come from another direction.

It's Just Business

Business is not evil.  I am a firm capitalist.  Google's bottom line is [drum roll] their bottom line.  Google as a company exists to make money.  I am a capitalist, and I am an engineer, so I understand the need to optimize.

The popular danger of our time, in business and in engineering, is too much optimization.  Cut some fat but leave the steak flavorful.  Exercise to get lean but don't drive yourself gaunt.  Google has been optimizing.  They said they were going to drop CalDAV, and they did.  It wasn't personal.  It was just business.  But it's personal to me.

Understand that Google has not locked "us" out of remote access to calendars.  (That is, we aren't limited to web access.)  What has happened is that the widely used calendar protocol has been quashed in favor of Google's own API.  They are similarly moving away from XMPP for GTalk, replacing that standard for IM with Hangouts and another Ghougle-ish API.  Like that great Internet hero before me, Al Gore, it's time for me to go.

I will gladly pay real money for an "Apps for Domains" like service that speaks standard protocols.  Having second thoughts about snubbing Yahoo mail?  You betcha.

There Went the Neighborhood

Back to that law of unintended consequences.  "We" wanted the Internet commercialized.  We wanted it accepted.  And now it is.  So we have people in the driver's seat who don't know how to drive.  Oh, they can press the accelerator.  In fact, they really like that pedal especially.  But not knowing the value of the brakes, they overshoot many destinations.  Not knowing how to steer, they miss many a turn (and take out a few pedestrians along the way).

It's not actually the new neighbors that are the problem.  It's the homeowners association, ignoring well-thought-out community governance, adding senseless deed restrictions, and neglecting the common grounds.

And, yes, this rant on Google brought to you by Google, who provides the Blogger service you're presently enjoying.  (But it is copied to another medium, while that capability persists.)

-- R; <><



Monday, June 10, 2013

dubya, dubya, dubya


Remember "www"?  It was that prefix of letters seen on every web site, the clue that "this is a world-wide-web thingy".  But everything is on the web, so who needs the extra typing?  Sheesh ... what a hassle!

Let's get rid of useless hassles.  Let's get rid of redundant things.  Such waste.

whining, whining, whining

In context, that old "www" meant something.  In the context where the world wide web was created, there were other things.  The "www" sites were not all there was.  Originally, "www.blogger.com" was but one of several computers or services on the internet (under the domain of "blogger.com").  I am picking on Blogger since that is where I am posting.  As it happens, there is a "mail.blogger.com" which is presumably used for ... drum roll ... email.
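
In zone-file terms, such a domain looks roughly like this hypothetical fragment (example.com and the addresses are placeholders from the documentation ranges):

; hypothetical zone fragment for example.com
www   IN  A      203.0.113.80    ; the web server
mail  IN  A      203.0.113.25    ; the mail server
ftp   IN  CNAME  www             ; an alias, pointing at the same box

Each name means something; "www" was never decoration.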

You may notice that Blogger, turning out to be an excellent example, has lots and lots of "sites" under its domain.  YES.  That's what I'm talkin about.  Each one means something.  These are under "blogspot.com".  But this is not true for sites like Twitter.  (And it's not true for all blogging destinations.)

I blame excessive consumerism.  I blame "user laziness".  (Something about automobiles and the loose nut behind the wheel might fit here.)  (Being a loose nut myself, I can fairly jab at loose nuts everywhere.)  Twitter has a fixation on shortness, so they might actually get a hall pass this period.  Omission is not new.  People have been leaving off the "http://" part for years.  Ewww...  it's so ugly, so technical.  But that too actually meant something once upon a time.  (It means something even now.  You have been warned.)  Sorry, kiddo, you had to get a driver's license to hit the road.  Cars are technical and there's a certain amount of technical prowess the rest of us expect from you, Mr. Wheel Nut.  The format of a URL (that is, an internet web address) was well thought out.  But it's technical, so maybe you need your computer license before you hit the information superhighway.

Sadly, even excellent engineering feats like Firefox add to the problem and push us further down the slippery slope.  (FF now removes the "http://" part even if I manually include it.  Look who's a loose nut now!)  (You can restore the original behavior if you aren't scared off by the warranty warning.)  Consumerism!
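
For the record, the knob lives in about:config (the page behind that warranty warning); if memory serves, the pref is this one:

// set via about:config, or in prefs.js form as shown here
user_pref("browser.urlbar.trimURLs", false);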

three legs to stand on

As I type this post, I am seated on a chair.  I could type just as well standing up, but since I will be at this keyboard for most of the day for many other reasons, I'll keep my seat, thank you.  Now ... a chair really only needs three legs.  We see this all the time.  In fact, three-legged chairs are quite fashionable.  So let's just whack one of the legs under me.  No thank you.  Removing one leg in this case would lead to ... drum roll ... instability.

Who cares?  Obviously, I care.  Obviously, I think others should care.  But it's reasonable to wonder why Sir Santa is spewing.  It happens that I was reading another excellent blog post about the current national crisis.  As with a lot of bloggers, this fellow does not have a "www" in his URL.  [sigh]  As it happens, he's having DNS problems (and makes a point that they are not "issues").  His DNS problems are probably not related to his omission of the dubya, dubya, dubya, but it's an interesting correlation.

The Point

I made a little analogy with chairs.  We're talking about structure.  The internet is losing structure in other ways.  (The glut of new top-level domains is a sign ... and a worry.)  If we believe in rough consensus and running code, then we don't always care.  But being a coder, I know that removing legs sometimes knocks you over when you don't expect it.  I happen to know an engineer who removed some working code recently, ostensibly to eliminate redundancy (redundancy not being a bad thing) and to reduce overhead (overhead that was not evident in that case).  He is now experiencing problems with his new code: instability at least, and maybe worse.

So ... maybe not all those hassles were useless.  Maybe not all the apparent redundancy was just for reasons of repetition.

Bring back the dubya, dubya, dubya.

-- R; <><



Wednesday, March 27, 2013

SUDO or Not SUDO

I recommend 'sudo' for root access.
As a rule, I do not log in to "root" directly, and I encourage others to do the same.

It's pronounced "soo-doo" because you're DOing something as the superuser.  So the subject of this post should conjure a Shakespearean theme.  Do you 'sudo'?  That is the question.

This topic came up in IRC conversation with Russ at pmman today.  He ended with "you need to blog".  So let it be written.

When fronting via 'sudo', access to root is better controlled, logged, audited.
Non-root credentials are doled out to individuals.  They sign on each with their own ID.  (No sharing!)  Users with 'sudo' rights can then perform any needed activity, with records pointing to the correct "who dunnit" when things go awry.  (If nothing goes awry, then there's nothing to worry about.)  People, this is a very good way to run your systems.  This is so painless that I do it even on systems where I'm the only admin.  Like Nike sez, "just do it".
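
As a sketch, the heart of /etc/sudoers can be one line; always edit it with 'visudo', and note the admin group name varies by distro ('wheel' here, 'sudo' on Debian-ish systems):

# members of group wheel may run any command, as any user, via sudo
%wheel  ALL = (ALL) ALL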

Computing is personal.
Even administration and maintenance is personal.
So one should connect to the target system with one's own ID.
Once connected, one has rights.  (One also has responsibilities. That's part of the deal.)

Counting 'sudo', we have at least three ways to escalate into UID 0 privileges: console or other password-based sign-on, SSH sign-on (for which keys are a better authentication method), and 'sudo' itself (having authenticated as an individual before becoming root).  I recommend dispensing with passwords altogether.  Like Nancy sez, "just say no!".  Root access on the console ... there are so many ways to do that.  (And some won't like what I recommend, but I press on.)

Using SSH (with keys) to go directly to root is not recommended.  Nevertheless, it's better than password access.  (Discussion of direct SSH sign-on as root is beyond the scope of this post.)

About root's password: get rid of it.  Render it unusable.  "But what if I need console root access?"  There are many ways to get root access from the console (physical or virtual).  Short answer: skip them all and throw a root shell on the console at startup.  When you think about the physical (or virtual) security of the machine, and the many ways of recovering root access, this makes perfect sense.  A password merely delays the inevitable: console access means root access.  "But a password slows down the bad guys."  Time is relative and "slows down" hurts the good guys too.  Do not be deceived.

'sudo' expands your options. 
With careful configuration, you can be selective about what a "sudoer" can do.  In English: 'sudo' access does not equal root shell access.  True, the non-shell programs which one might authorize via 'sudo' should have no shell back door.  This is an ongoing requirement.  Duh!  So 'sudo' not only gives full root shell authority to the inner circle, it also allows selective privileged operation by the somewhat less trusted.  (Lowering the phone call rate is a Good Thing.)
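
For example, here is a hypothetical rule letting a helpdesk group bounce the web server and nothing else (the group name and command path are made up for illustration):

# selective privilege: restart apache, nothing more
%helpdesk  ALL = (root) NOPASSWD: /usr/sbin/apachectl restart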

All of this is said in the context of no target password.  See the "NOPASSWD:" tag in the examples shipped with your system.  Using a password (of the originating ID rather than of the target) may fit your policy.  Other challenge/response schemes may be warranted.  That's fine, but beyond scope like those SSH details.  Again, consider the actual security provided by your policies and consider the scalability of your procedures.

Summary: install 'sudo' (if it is not already installed, which is rare), then ...

  • authorize yourself (where "yourself" means the admins reading this) 
  • stop signing on to "root" ... don't do it anywhere anymore
  • sign on as yourself, then 'sudo' as needed
  • put a root shell on the console (eg: by way of /etc/inittab)
  • render the root password unusable 
  • consider non-shell operations for less trusted users who need them

-- R; <><



Thursday, January 3, 2013

Partitionless for Performance

Geeky Gadget Gifts

I got a new disk for Christmas.

The disk in my primary workstation (a nice Lenovo ThinkPad) had begun to fail.  So my wife gave me a shiny new Seagate "Momentus" to replace the stock Hitachi.  I had fun copying the filesystems ... two dozen of them.  (Tedious, but nice to start fresh.)  "On a clear disk, you can seek forever."  (voice of Barbra Streisand)

I decided to go partitionless.
The old disk had four partitions:  two Windows partitions (came with the machine), a Linux boot partition, and the largest given over to LVM (logical volume manager) as a PV (physical volume, the backing store for Linux LVM).  That large partition was the sole PV in a VG (volume group) containing more than twenty filesystems.  The new disk is just one grand PV; no partition table.  Works.
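
Reduced to commands, the whole-disk recipe is roughly this (device and volume names are examples):

# the raw disk itself becomes the PV -- no fdisk, no partition table
pvcreate /dev/sdb
vgcreate vg0 /dev/sdb
# carve each filesystem out of the VG as a logical volume
lvcreate -L 20G -n home vg0
mkfs -t ext4 /dev/vg0/home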

It seems a little silly to have a partition table for just one partition.  Nevertheless, the concept of partitioning disks is so entrenched that we go that route by default.  Most uses do not require partitioning.

Using LVM in place of a partition table is really a whole different story.  If I had done things purely for the sake of this little blog post, I would have made one huge 750G filesystem (probably EXT4).  Been there; done that.  It works too.

Cheating ... a little

The title of this post is "Partitionless for Performance".  I'm stretching the story a bit.  The performance boost from discarding the partition table alone is negligible.  However, if your team gets used to the idea, then you may save on the human aspect of "performance": there would be one less thing to have to manage.  Put an arrow here.  This is the point.

Well ... there's also the point that we purists like getting rid of unused layers.

Running partitionless is kind of a big deal for me.  I do a lot of virtualization.  For virtual servers, unpartitioned "disks" make a lot of sense.  For example, servicing a filesystem from the "host" side is a whole lot easier when the (virtual) disk has no partition table to interfere.  If the virtual disk is represented by a file or a logical volume, you can mount it directly.  Voi-la!
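
A quick sketch of the difference (paths are examples): with the filesystem at offset zero there is no sector math, just a mount.

# guest disk backed by a plain file, filesystem at offset zero
mount -o loop /var/lib/vm/guest01.img /mnt/guest01
# with a partition table, you would first need the offset
# (fdisk -l and arithmetic, or a helper like kpartx)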

So this is really more about manageability than about run-time performance.

The advantage seems obvious, but many people don't know it can be done. 

Disclosing Performance

The laptop is not virtual.
Going partitionless on the new disk was more to prove the point.  It works.
And when running partitionless, you could avoid LVM too.  (Except that PVs are identified by their UUID, so there may be less confusion w/r/t content.)
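
That identification is easy to see with blkid, which reads the signature at the front of the device (device name is an example):

# whole-disk PV or bare filesystem, either announces itself
blkid /dev/sdb
# a PV reports TYPE="LVM2_member"; a bare ext4 reports TYPE="ext4"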

When it comes to operational performance, LVM can actually make things worse ... or better.  LVM is way more sophisticated than plain partitioning.  It naturally brings more internal complexity with it.  I'm looking for hard numbers, and I confess that I don't have them yet.  (And ultimately, if you can't measure it, ... you know how that goes.)

The wisdom of the community says that LVM adds little to the burden of disk I/O.  So until we get factual measurement, that's a smart place to start.

But then there is a case for a win, and it's a biggie.
LVM can add striping and/or redundancy.  Striping, if not offered by the physical backing storage, does improve performance.  (And redundancy helps a lot for recovery.)  There are measurements of the performance gains from striping.  (The value varies a lot based on your workload.)
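
As a sketch, striping a new LV across two PVs is a pair of flags ('-i' is the stripe count, '-I' the stripe size in KB; names are examples):

# spread I/O across two physical volumes
lvcreate -i 2 -I 64 -L 100G -n scratch vg0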

The Road So Far

There is a bootstrapping issue.
Most bootstraps do not grok LVM.  They cannot pierce the LVM veil like they can pierce the partitioning veil.  I chose not to resolve this.  My solution is to boot from USB media.  (There were other reasons that removable boot media seemed like a good idea.  Maybe I will talk about it in another post.)  It's been less than a week, and things are working very very well.

So the conclusion is this:
If you're running virtual, consider unpartitioned filesystems at offset zero.
When you're not running virtual, consider using LVM instead of a partition table.
(Perhaps not for your boot disk, but you get the idea.)