Friday, November 18, 2011

Here's How Box.net Wants To Be More Like Facebook

Box.net, the cloud-based collaboration startup that's taking on Microsoft in the enterprise, wants to be more like Facebook.

Not that it wants to become a consumer-oriented social network. Rather, it wants to become the platform on which other enterprise startups build successful businesses. Basically, Box.net wants to foster its own Zynga equivalent.

To spur that on, the company today launched a program called the Box Innovation Network, or /bin (the slash is something of an inside joke -- /bin is the directory that holds essential command binaries on Unix systems).
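For readers unfamiliar with the reference, the joke is easy to see from any Unix-like shell (paths and contents vary by system; this is just a quick illustration):

```shell
# /bin is the traditional Unix directory for essential command binaries --
# the directory behind the "/bin" program name.
ls /bin | head -n 5    # peek at a few of the commands stored there
```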

As part of /bin, the company is offering up to $2 million in funds to encourage developers to build business apps that use the Box APIs for collaboration and content management. The funds will be used for equity investments, intellectual property purchases, and co-development.

CEO Aaron Levie told us, "If you think about enterprise software today, it's complicated to build and integrate, expensive, and hard to customize. It's not built for the way we work. In all three areas we are trying to drive innovation, but can't do it alone."

Box is partnering with several other companies on the initiative, including VMware, Rackspace (hosting), Salesforce's Heroku (app development), and Twilio (for voice and SMS-enabled apps).

Monday, October 3, 2011

Oracle President Attacks Microsoft, HP, IBM and SAP

Oracle President and CFO Safra Catz today said Hewlett-Packard may be a “former” partner, claimed Microsoft is distracted by 16-year-old consumers, and said IBMers should hide under their desks in Armonk, N.Y., as Oracle’s Exadata business marches forward. Catz’s comments, some tongue-in-cheek, surfaced at Oracle OpenWorld today in San Francisco. Here’s the blow by blow.

Catz and Oracle (ORCL) Channel Chief Judson Althoff shared the stage during Oracle PartnerForum, part of the broader Oracle OpenWorld gathering today. Amid an audience of roughly 4,500 Oracle partners, Catz took aim at Oracle’s four fiercest rivals. Her comments were captured in a FastChat video.

Thursday, July 21, 2011

Three free tips to better protect your iPhone

Smartphone security expert Graham Lee offers some simple advice on how better to protect your iPhone or iPad.

The iPhone - along with the rest of Apple's iOS product family - seems to me to be the TARDIS of the computing world.

There's a full-featured UNIX computer with almost permanent network access, and it fits in my pocket: surely it must be bigger on the inside. Apparently you can even use them to make phone calls, too.

It certainly puts my first portable to shame.

Of course, such a powerful computer must be protected, particularly when you use it for sensitive tasks like email and editing work documents on the move. So here's a short list of iOS tips to help you stay secure using your iPhones and iPads.

1. Set the passcode

All of Apple's products that run iOS allow the user to configure a passcode. The passcode controls access to the apps and data installed on the device. No passcode, no data - and there's no way to get around that, because content including saved passwords and mail attachments is encrypted so that without the passcode, iOS can't read the content at all.

To enable the passcode, first launch the Settings app. In the "General" section, look for the "Passcode Lock" setting. Tap that, and you'll see a screen that allows you to turn the passcode on, and to define when it's required and whether to use a "simple passcode" (a four-digit PIN) or a longer password.

Even though iOS is designed to slow down "brute force" attacks (where the attacker enters multiple guesses at the passcode until he finds the correct value), guessing one of the 10,000 simple combinations is very quick.

Particularly if you use one of the most common PINs.

It's best to turn simple passcode off and use a stronger password, following Graham Cluley's advice.
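The arithmetic behind that advice is easy to check. Here is a minimal sketch; the guess rate is purely illustrative (iOS enforces escalating delays between failed attempts, and the optional erase-after-ten-failures setting changes the picture entirely):

```python
# Compare passcode search spaces under an assumed, illustrative guess rate.
def worst_case_hours(keyspace, guesses_per_second=10):
    """Hours needed to exhaust the whole keyspace at the given rate."""
    return keyspace / guesses_per_second / 3600

simple_pin = 10 ** 4   # four-digit "simple passcode": 10,000 values
longer_pw = 36 ** 6    # six characters from a-z and 0-9: ~2.2 billion values

print(f"4-digit PIN:         {worst_case_hours(simple_pin):.1f} hours")
print(f"6-char alphanumeric: {worst_case_hours(longer_pw):,.0f} hours")
```

Even at this modest rate, the four-digit space falls in well under an hour, while the longer password pushes the worst case out by years; that gap is the whole argument for turning "simple passcode" off.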

2. Don't jailbreak

By default, Apple limit the software that will run on your iPhone or iPad to their own apps, and anything that you download through their app store. They do this to restrict the chance that malware gets onto the devices, and so far it seems to work: iOS has not seen the same malware problems that have plagued Android.

Google are more permissive about the software allowed in their marketplace, and allow installation of non-marketplace apps: both good avenues for getting malware onto a mobile phone or tablet.

Of course, some people (including regular Naked Security contributor Duck, who discussed the issue in a recent Chet Chat podcast) see this as an unwelcome limitation on what they can do with the phones that they paid for.

Such people may turn to jailbreaking to remove Apple's limitations, so that they can install unapproved software or reconfigure the operating system.

Down that path lies iPhone malware and an easy route for attackers to install remote access tools, keyloggers (well, taploggers I suppose...) and other nasty things.

"Grange Hill" stalwart Zammo would probably agree with me here: when it comes to jailbreaking, just say no.

3. Be careful of where you surf

Phishing, and other scams like the recent iTunes giftcard ruse, do not depend on your technology choices: they're designed to fool you, not your computer.

With that said, it's perhaps easier to be taken in when surfing with Mobile Safari: user interface hints including the location bar and the SSL padlock are smaller, and in scrolling to read a page's content you'll push them off the top of the page and perhaps forget to check that you're on the correct site.

Especially if you've just snuck your phone out during that boring meeting, and are still half-listening to the Q3 sales projections.

Personally, I reserve sensitive tasks including online shopping and banking for either native apps released by the banks and stores, or for the desktop browser where it's easier to see whether I'm on the right website.

I hope you found those tips useful. For more chat about mobile security and privacy, please follow me on Twitter.

Thursday, June 30, 2011

Robert Morris, Pioneer in Computer Security, Dies at 78

Robert Morris, a cryptographer who helped develop the Unix computer operating system, which controls an increasing number of the world’s computers and touches almost every aspect of modern life, died on Sunday in Lebanon, N.H. He was 78.

The cause was complications of dementia, his wife, Anne Farlow Morris, said.

Known as an original thinker in the computer science world, Mr. Morris also played an important clandestine role in planning what was probably the nation’s first cyberwar: the electronic attacks on Saddam Hussein’s government in the months leading up to the Persian Gulf war of 1991.

Although details are still classified, the attacks, along with laser-guided bombs, are believed to have largely destroyed Iraq’s military command and control capability before the war began.

Begun as a research effort at AT&T’s Bell Laboratories in the 1960s, Unix became one of the world’s leading operating systems, along with Microsoft’s Windows. Variations of the original Unix software, for example, now provide the foundation for Apple’s iOS and Mac OS X as well as Google’s Android operating systems.

As chief scientist of the National Security Agency’s National Computer Security Center, Mr. Morris gained unwanted national attention in 1988 after his son, Robert Tappan Morris, a graduate student in computer science at Cornell University, wrote a computer worm — a software program — that was able to propel itself through the Internet, then a brand-new entity.

Although it was intended to hide in the network as a bit of Kilroy-was-here digital graffiti, the program, because of a design error, spread wildly out of control, jamming more than 10 percent of the roughly 50,000 computers that made up the network at the time.

After realizing his error, the younger Mr. Morris fled to his parents’ home in Arnold, Md., before turning himself in to the Federal Bureau of Investigation. He was convicted under an early federal computer crime law, sentenced to probation and ordered to pay a $10,000 fine and perform community service. He later received a computer science doctorate at Harvard University and is now a member of the Massachusetts Institute of Technology computer science faculty.

Robert Morris was born in Boston on July 25, 1932, the son of Walter W. Morris, a salesman, and Helen Kelly Morris. He earned a bachelor’s degree in mathematics and a master’s in applied mathematics from Harvard.

At Bell Laboratories he initially worked on the design of specialized software tools known as compilers, which convert programmers’ instructions into machine-readable language that can be directly executed by computers.

Beginning in 1970, he worked with the Unix research group at Bell Laboratories, where he was a major contributor in both the numerical functions of the operating system and its security capabilities, including the password system and encryption functions.

His interest in computer security deepened in the late 1970s as he continued to explore cryptography, the study and practice of protecting information by converting it into code. With another researcher, he began working on an academic paper that unraveled an early German encryption device.

Before the paper could be published, however, he received an unexpected call from the National Security Agency. The agency invited him to visit, and when he met with officials, they asked him not to publish the paper because of what it might reveal about the vulnerabilities of modern cryptographic systems.

He complied, and in 1986 went to work for the agency in protecting government computers and in projects involving electronic surveillance and online warfare. Although little is known about his classified work for the government, Mr. Morris told a reporter that on occasion he would help the F.B.I. by decoding encrypted evidence.

In 1994, he retired to Etna, N.H., where he was living at his death.

In addition to his wife and his son Robert, of Cambridge, Mass., Mr. Morris is survived by a daughter, Meredith Morris, of Washington; another son, Benjamin, of Chester, N.J.; and two grandchildren.

Wednesday, June 15, 2011

Introducing the Linux Oracle Enterprise Manager Manual

Welcome to the first edition of the Linux Oracle Enterprise Manager Manual. Oracle Enterprise Manager is supported on multiple Unix, Linux and Windows operating systems on a wide variety of hardware and virtualization platforms. Oracle has consolidated all of the Oracle Enterprise Manager Unix, Linux and Windows operating system documentation within a large documentation library with hundreds of external links to My Oracle Support notes and addenda. The Oracle Enterprise Manager installation documentation is presented in a consolidated format with the Unix, Linux and Windows operating system installation steps merged into the same sentences and paragraphs. For example, there is not a dedicated chapter or section about the installation of Oracle Enterprise Manager for Oracle Linux; the Linux installation steps are merged together with Unix and Windows. The consolidated format, along with the external links to My Oracle Support, makes the Oracle Enterprise Manager 11g documentation challenging to use for any single operating system.

Tuesday, May 31, 2011

Will Linux Kernel 3.0 be a ground-breaking achievement?

The Linux kernel, the core of the Linux family of Unix-like operating systems, is all set to launch its newest version, Kernel 2.8.0, likely to be named Kernel 3.0, a report on the ThinkDigit website said.

The Linux kernel 2.x.x series has been in the market for nearly 15 years. The recent Linux versions that have been released are 2.6.37, 2.6.38 and most recently 2.6.39. There have been issues and criticisms about the security and the complex structure of these versions, mainly originating from the complicated numbering system of the versions and their updates, which the kernel developers hope will be fixed in the new 2.8.0 or 3.0 version.

The new version aims at stabilizing the system for future projects like Linux 3.1 or 3.2, and the security updates will now be coming as 3.1.1 or 3.2.4.

With the kind of success Linux 2.6.x.x has experienced in the last 15 years, the developers will be under pressure to deliver ground-breaking features in Linux 3.0, rather than just another Linux release.

Tuesday, May 3, 2011

Can Unity create first consumer-class Linux distro?

Linux, from the start, was never about being a consumer desktop.

It was a Unix-like server operating system that could run on some college kid's PC. Which later could then run a graphical environment. And sound (sort of).

That did not stop people from trying to get it to become a consumer desktop. Caldera OpenLinux--my very first distro--was an early attempt to present ease-of-use to those users who were "less than power." Corel Linux was a better attempt, in that it brought WordPerfect and the rest of Corel Office to the table.

There were others, of course, as Linux got more mature, hardware issues settled down, and apps were created. But nothing seemed to take hold of the desktop market and be more than an IT lover's novelty OS. This was certainly not the case on the server side, which sees stunning success stories every day. But that kind of server success is to be expected, because that's where Linux excels.

Then there was Ubuntu.

Ubuntu, the Debian GNU/Linux-based distro that eschewed "Linux" from the start, set out to be the world's first commercially successful Linux desktop. That has been the goal of its commercial vendor, Canonical Ltd., from the beginning.

To reach that goal, Canonical has made some decisions that have led us to where we are today: Ubuntu 11.04, also known as Natty Narwhal. Also known as the 1.0 launch of the Unity desktop interface, a new GNOME-based shell that maximizes the amount of content viewing space by shoving toolbars and launch menus out of the way. With an eggplant color scheme.

Yesterday, I read Steven J. Vaughan-Nichols' discussion with Canonical founder Mark Shuttleworth on the merits of Unity, and saw an interesting point that Vaughan-Nichols raised, but did not follow as far as I would have gone. Citing another blog lamenting GNOME 3.0, the "official" new GNOME shell that's out and about, as "Defective by Design," Vaughan-Nichols states:

"GNOME 3.0, like too many Linux/Unix interfaces, was designed by software developers for software developers."

Unity, on the other hand, was built with Canonical's usability testing and performance goals in mind. Which is why, as Canonical reps have explained ad nauseam, Canonical chose to take a different path with Unity rather than stick with a pure GNOME 3.0 environment for Ubuntu.

Yes, the irony in that last sentence is not too subtle.

It is not clear if Vaughan-Nichols' next passage is paraphrasing something Shuttleworth actually said, or if Shuttleworth just built his statement off of a point Vaughan-Nichols made in their conversation:

"Is Unity too simple for power users? Yes, it is. But, as Shuttleworth tells us that's by design. If you don't like simple, consumer-oriented desktops, you'll want to look at another Linux distribution because that's exactly where Ubuntu is now and will continue to go."

And that point got my attention. Do we really want power users to go off and find another distro? Again, it's not clear who actually came up with that notion in the Vaughan-Nichols article, which is why the headline for this article isn't "Shuttleworth tells power users to step off Ubuntu." But no matter where the idea came from, I can't say it's something with which I agree.

Linux has never had a clear separation between "regular" users and "developer" users. The line has always been a bit blurry, which has had the effect of steepening the learning curve for incoming Linux users. Because Linux was designed by developers for their own use, new Linux users always had to put a little more effort into learning how to use the operating system. And, because Linux was sometimes built without the goals of independent software vendors in mind, it made Linux a challenge for ISVs to jump into as well.

This situation would tend to make one think that having a "pure" consumer distro, then, would be a good thing. After all, lower the barriers of entry and more consumers will come. More consumers, and ISVs will start to want to get their apps in front of new Linux users. More apps, and more consumers--well, you get the idea.

But, despite the success and good works of commercial Linux vendors like Canonical, Red Hat, and Novell/Attachmate, Linux has never been a vendor-only system. The communities around every part of Linux are those power users and developers who like to spend their time ripping the guts out of their systems and tweaking code for the pure pleasure of it. There are such users of Windows and OS X, too--but the difference is Microsoft and Apple don't need them.

Linux distros need their power user/developer set.

In some ways, I think the Linux design decisions made in the past catered too much to this power class of user, which did hold back the success of the Linux desktop.

But Linux vendors like Canonical cannot move too far in the opposite direction just to make only consumers happy. To do so would cut out a significant resource for future development. That balance is the price to pay for being an open source project.

Let's see if Unity can live up to its name for all levels of users.

Tuesday, April 19, 2011

In the beginning: Linux circa 1991

In 2011, you may not “see” Linux, but it’s everywhere. Do you use Google, Facebook or Twitter? If so, you’re using Linux. That Android phone in your pocket? Linux. DVRs? Your network attached storage (NAS) device? Your stock-exchange? Linux, Linux, Linux.

And, to think it all started with an e-mail from a smart graduate student, Linus Torvalds, to the comp.os.minix Usenet newsgroup.

Who knew what it would turn into? No one did. I certainly didn’t. I came to Linux later, although I was already using Minix and a host of other Unix systems including AIX, SCO Unix System V/386, Solaris, and BSD Unix. These Unix operating system variants continue to live on in one form or another, but Linux outshines them all.

The only real challenger in popularity to Linux from the Unix family already existed in 1991 as well, but I’ll bet most of you won’t be able to guess what it was.

Remember this now, folks; I may use it in another Linux quiz down the road. The answer is NeXTStep. You should know it as the direct ancestor of the Mac OS X family.

The real question isn’t how Linux got its start. That’s easy enough to find out. The real question has always been why did Linux flourish so, while all the others moved into niches?

It’s not, despite what former Sun CEO Scott McNealy has said, that Solaris ever had a realistic chance of making sure that “Linux never would have happened.” Dream on, dream on.

Linux overcame Solaris, AIX, HP-UX, and the rest of the non-Intel Unix systems because it was far less expensive to run Linux on Commercial Off-The-Shelf (COTS) x86 hardware than it was to run them on POWER, SPARC or other specialized hardware. Yes, Sun played with putting Solaris on Intel, three times, but only as a price-teaser to try to sell customers Solaris on SPARC.

In addition, historically Unix’s Achilles heel has been its incompatibility between platforms. Unlike Linux, where any program will run on any version of Linux, a program that will run on say SCO OpenServer won’t run on Solaris and a Solaris program won’t run on AIX and so on. That always hurt Unix, and it was one of the wedges that Linux used to force the various Unix operating systems into permanent niches.

There were other x86 Unix distributions–Interactive Unix, Dell SVR4 Unix (Yes, Dell), and SCO OpenServer–but none of them were able to keep up with Linux. That’s why SCO briefly turned into a Linux company with its purchase of Caldera, before killing itself in an insane legal fight against Linux that was doomed to fail from the start.

It was also to Linux’s advantage that its license, the GNU General Public License version 2 (GPLv2), made it possible to share the efforts of many programmers without letting their work disappear into proprietary projects. That, as I see it, was one of the problems with the BSD Unix family–FreeBSD, NetBSD, OpenBSD, etc.–and its BSD License.

Another plus in Linux’s favor was that, as it turned out, Linus Torvalds wasn’t just a great programmer; he was a great project manager. Oh, Torvalds can be grumpy, very grumpy, but at the end of the day, after almost twenty years in charge, he still manages to get thousands of developers to work together on an outstanding operating system. Not bad for an obscure graduate student out of Finland, eh?

Monday, April 4, 2011

Cloud Computing Needs Leaders to Restore Optimism

I'm returning to the US this week after spending the past 14 months in Asia, observing Cloud Computing and the nations of the world from this vantage point.

I'm pessimistic, and this view comes after a long life of (potentially deluded) optimism.

A History of Optimism
When Unix showed the way out of choosing between Orwellian mainframe environments and the horrible, clunky little toy called DOS/Windows, I was optimistic.

When CD-ROM brought high-density storage to our little PCs for the first time, I was optimistic. The rise of personal email, followed shortly thereafter by the Worldwide Freaking Web made me more optimistic than ever.

Then, a brief pause while those much smarter than I figured out how to corral Web Services, decoupling and loose recoupling, virtualization, SOA, Ajax, and all of the front-end and back-end scripting languages that come with them into Cloud Computing - and I was optimistic again!

Mood Swing
But today I see a United States that may have finally wounded itself fatally as the world's economic and political leader through a combination of self-indulgence, a sense of entitlement, and a media- and entertainment-driven Idiocracy that is winning the tug-of-war against knowledge and intelligence.

I see a developed world that has long resented US power but may now be panicking in the absence of it.

I've also seen a China up close that is nowhere near ready to assume global leadership in any capacity - as if anyone in the US would really want it to. And I've seen the hundreds of millions in Southeast Asia who wonder what the heck is going on. "I thought Obama was going to fix everything," is a common lament in the region. "I thought Americans were smart."

Is This the Titanic For Real?
So I would like to think we're not arranging deck chairs on the Titanic. Maybe the continuing waves of immigration will continue to keep the country strong and vibrant. The US has a population growth rate that's almost twice that of China (albeit with a much smaller population), with most of that coming from immigration.

But Cloud is not going to resuscitate the US economy and continue to breathe life into the rest of the world unless there is enough national and corporate leadership to make it happen.

Leadership is the most prized commodity among, well, leaders. Without it, organizations and countries fail.

Hellbent for Leather
The word's roots are simple and humble enough, coming from the Old English and German "leder," or "leather." You lead the horses with the leather straps, the leder. And that's it. Nothing profound or transcendent.

We expect our leaders to do much more than hang onto the straps. We expect them to transcend the day-to-day humdrum, to set a tone and create a vision, and Lead with a capital L.

As no less an expert than John Seely Brown, former Xerox PARC director, has said, "A company's top executives should educate themselves about the potential for cloud computing ... to prepare for disruption & transformation."

American technology companies have done a fantastic job in assuming leadership in all aspects of Cloud Computing.

But who amongst us believes that the US currently has the political leadership - I'm talking about those in power throughout the federal and state governments, not just one person - to bring the US out of its worst economy in three decades? I didn't think so.

And do you believe in your company's leadership? Or are you one of those leaders, and do you believe in yourself? I hope so.

Tuesday, March 22, 2011

Oracle gearing up Solaris 11 compatibility

Oracle has been building out the next generation version of its Solaris UNIX platform ever since it acquired Sun last year.

Solaris 11 is available now as a developer preview in something called Solaris 11 Express, which came out in November.

Now Oracle is ramping up the Solaris 11 effort with a new Oracle Solaris 11 Compatibility Checker Tool. This is an important step for Oracle and Solaris users.

While there will be new applications written for Solaris 11, the vast majority of available applications will be those that were written for prior versions.

"For more than a decade Solaris has maintained a Binary Compatibility Guarantee, and this guarantee is planned to continue after the release of Oracle Solaris 11," Oracle notes on the Compatibility Checker site. "However, it's still possible to build applications that, even though they compile and run successfully, are not using OS interfaces properly, or use deprecated interfaces, which may cause the application to break at some point in the future. It's always helpful to find potential trouble spots, adding yet one more way to assure your application continues to run."

Very true.

However, considering the fact that Solaris 11, as was the case with Solaris 10 before it, will have virtual container zones, users need not worry too much. With Solaris 11, a user could potentially run Solaris 8 in a virtualized container and then run apps written for that ancient operating system within it.

Sure, virtualization doesn't perform at bare-metal speeds, but with Solaris 11, virtualization is supposed to be improved.

Solaris 11 introduces a long list of improvements, and some of them could become issues for applications. There are a number of high-availability features, including runtime patching, which could potentially be issues for apps that require a certain state. ZFS is a wonderful filesystem, but it could represent a problem for older apps that weren't designed for it.

The right way to make sure that Solaris 11 is ready for prime time is to test apps against it and that's what Oracle is now clearly doing.

Monday, March 7, 2011

Itanium hits 10-year mark, less Windows

Big vendors aren't democracies, so when Kevin Armour, the CTO of Paycor Inc., heard last year that Microsoft was ending support for Itanium, he knew he was stuck.

"I was a little disappointed," he said of Microsoft's decision, which was made a year ago next month.

Armour's three-year-old Itanium platform had proved to be a reliable database platform for his fast-growing business. He had two Hewlett-Packard Integrity servers with eight sockets each and a total of 16 cores running SQL Server and Windows 2003 Server. Armour said he was due for an upgrade and had been pleased with the Integrity systems.

The 64-bit Itanium chip was introduced 10 years ago as a challenger to the RISC systems that dominated enterprise shops at the time. Microsoft ported Windows to the Itanium platform, but when the x86 64-bit chips arrived, first from AMD, "that completely took all of the momentum out of Windows sales on Itanium," said Nathan Brookwood, principal analyst at Insight 64.

Today, analysts from Gartner and IDC estimate that Windows on Itanium makes up no more than 10% of the installed base. Among the systems HP supports on the Itanium platform are HP-UX, OpenVMS and NonStop. In 2009, Red Hat announced plans to drop support for Itanium.

Jed Scaramella, an analyst at IDC, said HP has about 90% of the Itanium market. Itanium servers represented about 7.1% of the market last quarter, or $1.1 billion in worldwide revenue. The Itanium share was 8% in the year-ago quarter.

"Things are moving toward x86 - it's just a question for how long," said Scaramella of the broader trend in the Unix market.

There has been a longstanding overall decline in the market share of Unix systems, but it remains a very large market. Unix systems accounted for about 26% of worldwide server spending in the last quarter -- $3.8 billion -- declining about 0.4%, according to IDC.

Intel continues to improve its Itanium processor and just announced an upgrade, code-named Poulson: an eight-core chip. This chip will have 3.1 billion transistors versus the 2.2 billion on the current generation 9300 processor, Tukwila.

Brookwood called Poulson "a massive overhaul" of the chip architecture.

Itanium has what is known as a "six-wide" instruction issue. The chip was designed to exploit parallelism in programs, but the most it could issue was six instructions at a time. Poulson is the first Itanium able to issue 12 instructions at a time. "That, in theory, will enable applications to run faster at the same basic clock speed," he said.

But the capabilities of the x86 systems proved themselves to Armour.

Armour moved his Windows Server operating system and SQL Server from HP's Itanium-based Integrity line to HP's ProLiant, Xeon-based DL980 servers with eight sockets and a total of 64 cores.

Armour said his firm's investigation of the Itanium and x86 benchmarks left him confident that the x86 systems, with 512GB of memory each, could handle the workload. That work includes processing payroll and HR services for some 19,000 clients, representing about 700,000 employees.

The x86 systems, which were also less costly, have been running since November. "They are definitely as reliable" as the Integrity systems, said Armour.

In announcing its decision to end Itanium support, Microsoft cited the "natural evolution" of the x86 64-bit multi-core processors, and their improved scalability and reliability.

Microsoft will continue mainstream support for Itanium until July 2013 and extended support until July 2018.

Matt Eastwood, an analyst at IDC, doesn't see Microsoft's decision hurting Itanium. Windows use on that platform has been declining, and "the primary Windows workload, SQL Server, runs very well on x86 systems," he said.

Similarly, George Weiss, an analyst at Gartner, said Microsoft's decision is "of no great consequence to the general position of the platform." But he did note that being able to show that Itanium supported multiple platforms would have had marketing value.

"We continue to see a decline in Unix over the decade and a zero-sum game in market share among the RISC/Itanium vendors," said Weiss. "Each vendor will be pressed to come up with sustainability strategies for their non-x86 business."

Katie Curtin-Mestre, director for software planning and marketing, Business Critical Systems, at HP, said Windows on Itanium "represents a small percentage of HP's overall Integrity sales."

The changes in Windows support for the Itanium architecture will not impact the roadmap for Integrity hardware or the roadmap for HP-UX, OpenVMS and NonStop running on those servers, she said.

Monday, February 21, 2011

Intel talks Poulson architecture for Itanium servers

Intel has revealed its Poulson micro-architecture for its upcoming refresh of Itanium server processors.

Without giving a release date, Intel said the chips will be fabbed at 32nm scale and have an eight-core design for what Chipzilla claims is improved throughput and greater efficiency.

The Poulson micro-architecture has 3.1 billion transistors per chip and 54MB of on-chip memory, a 33 per cent increase in bandwidth speeds, and a maximum execution width doubled to 12 instructions per cycle.

Itanium is for mission-critical server applications and Intel pitches it as a platform for mainframe and Unix applications but it has struggled to get consistent vendor support. Poulson will be backwards compatible for sockets and systems based on the Itanium 9300 series processors.

According to V3.co.uk, Rory McInerney, microprocessor development group director at Intel, said, "We believe that we will be able to continue the momentum in Itanium through this decade."

Michael McNerney, director of server planning and marketing for HP business critical systems, told V3.co.uk, "We don't see customers saying: 'I'm an all Itanium or all [Intel] Xeon shop.' We see them breaking it down by workload."

McNerney saw Itanium as much a legacy support system as something for new server set-ups. µ

Sunday, February 13, 2011

Ken Olsen, Midrange Giant, Dies at 84

Ken Olsen, the co-founder and long-time leader of minicomputer originator Digital Equipment Corporation, died last week at 84. Perhaps more than any other man, all of us in the midrange owe Olsen for our paychecks and for the innovation that his engineering mind brought into being.

I was born the week before the first true minicomputer, the PDP-8, was brought into the world. At the time, the PDP-8 was noteworthy because a base configuration cost only about $18,000 with 4 kilowords of 12-bit core memory and a teletype for input and output. After a bunch of DECies defected in 1968 to create Data General (which was eaten by EMC in the 1990s), DEC delivered arguably one of the best systems ever to come out of the company, the 16-bit PDP-11 mini. In 1976, DEC made the jump to 32 bits and virtual memory addressing with the VAX line, and 1978's VAX-11/780 and its VT smart terminals quickly became the king of the midrange. By the mid-1980s, DEC had added 10 Mbit/sec Ethernet and VAXcluster clustering software to the VAXes, allowing clusters to be created and applications to run in a shared mode.

When Compaq was busy going broke in the wake of its 1998 acquisition of Digital, one of the secret sauces that Compaq sold off to Oracle was the VAXcluster and TruCluster microcode implemented in these VAXes; you know it today as Oracle's Real Application Clusters (RAC) extensions to the Oracle database, and it was in Digital's homegrown Rdb database way before Oracle got its hands on it. So, given all of this engineering excellence, driven by Olsen and some of the smartest people in data processing at the time, it was no surprise that Digital grew to over 120,000 employees, stole huge chunks of business away from IBM, and became the second largest computer company in the world in the late 1980s, peaking at $14 billion in annual sales.

That pressure from DEC, more than anything else, made IBM focus and create a little thing they called the Application System/400 in 1988. Digital went on to make plenty of mistakes--not taking the threat of open systems and application portability seriously enough until it was too late and ditto for PCs--but its VAX and Alpha processors were second to none and its systems software was also second to none. DEC was an old-school, engineering-driven IT company and it did not see the threat that volume PCs and their processors would have on all systems. Just like Intel, in public at least, is in denial about ARM processors.

My wife went to MIT, where Olsen got his degrees in electrical engineering, and I have walked the halls of the Lincoln Lab where Olsen and his peers, working for the U.S. Department of Defense, did their work in defining what a computer was in the wake of World War II. It must have been a lot of fun to be Ken Olsen most days.

Say what you will about Olsen, but he thought that PCs were toys and that Unix was a bunch of "snake oil" and that Digital's own proprietary operating system, VMS, was superior to Unix and then Windows. His company's vision of clustered minicomputers with smart terminals is not all that different from the cloudy future we all seem to be unavoidably moving toward. Olsen, no doubt, would have said that a graphical user interface was for sissies and that there wasn't anything you couldn't have done with a properly configured VT terminal.

Monday, February 7, 2011

Lieberman Exposes Super-User Activity to SIEMs

Organizations can feel a little more secure that their IT workers aren't abusing powerful user profiles as a result of integration work done by Lieberman Software and Q1 Labs. The two security software companies teamed up to ensure that every use of Lieberman's Enterprise Random Password Manager is tracked by Q1 Labs' security information and event management (SIEM) software.

Lieberman's ERPM is designed to streamline and secure the process of granting IT workers elevated authority on a server or application. ERPM controls access to powerful user profiles, such as ALLOBJ on the IBM i OS or ROOT on Unix, through the passwords that are associated with these user profiles. IT workers can get the authority they need by logging into ERPM, which randomly generates a password for the user profiles in question. The software, which runs on a SQL Server or Oracle database, supports most popular platforms, including IBM i, z/OS, Windows, Linux, Unix, Cisco networking gear, major user directory servers, and others.
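The core idea behind this kind of privileged password rotation is easy to sketch. The snippet below is not Lieberman's implementation — it is a minimal illustration, assuming a simple in-memory store and invented names, of how a checkout system can hand out a freshly randomized credential on each request:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def rotate_password(length=24):
    """Generate a cryptographically random password for a privileged account."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

class PasswordVault:
    """Toy stand-in for a privileged-credential store (illustrative only)."""

    def __init__(self):
        self._current = {}

    def check_out(self, account):
        # Hand the requester the current password, then immediately
        # rotate it so the disclosed value is effectively single-use.
        password = self._current.get(account) or rotate_password()
        self._current[account] = rotate_password()
        return password
```

In a real product the vault would also push the rotated password to the managed system (Windows, IBM i, a Cisco device, and so on) and log the checkout for auditing; the sketch only shows the randomization and single-use checkout idea.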

Lieberman already offers its customers the option of requiring two forms of user authentication (including via RSA devices) before ERPM will grant access to powerful user profiles. But with such a treasure trove of corporate resources sitting on the other side of the ERPM wall (one shudders to imagine what a knowledgeable hacker could do if he were granted full access to an IBM i or System z server of a major public company), this is a situation where you almost can't have too many walls, or too much inter-connectedness among security systems.

While there's little question that Lieberman successfully maintains tight security over its customers' delegated domains via ERPM, larger enterprises with big IT security concerns clearly want to view ERPM activities via their SIEMs, those all-seeing, all-knowing eyes in the sky that are charged with detecting coordinated security attacks on corporate information systems.

To that end, Lieberman has embarked upon a concerted effort to get ERPM interfaced to, and certified with, other enterprise security systems. Last year, the Los Angeles company certified ERPM to work with the SIEM from ArcSight, which attracted so much positive attention that it was snapped up by Hewlett-Packard last fall for $1.5 billion. It has also integrated ERPM with third-party incident reporting and tracking systems.

Last week, Lieberman announced that ERPM activities will be exposed to QRadar, the SIEM from Q1 Labs, another respected developer of enterprise security tools (and one that now supports IBM i). According to the vendors, the certification ensures that ERPM can effectively leverage Q1 Labs' LEEF and AXIS "open security intelligence protocols" to identify security threats and anomalies involving powerful user profiles and the passwords that authorize IT workers to use them.

This means that all password check-in and check-out activities, credentials changes, and successful and failed password verifications managed by ERPM are now visible in QRadar, where they can be correlated with other security events in real time. Reporting and auditing elements of ERPM are also now exposed to QRadar.
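LEEF (Log Event Extended Format) is Q1 Labs' text format for feeding events into QRadar: a pipe-delimited header followed by tab-separated key=value attribute pairs. As a rough illustration of what a password check-out event might look like on the wire — the vendor fields and attribute names below are invented for the example, not taken from the ERPM product:

```python
def to_leef(vendor, product, version, event_id, attrs):
    """Serialize an event as a LEEF 1.0 record: a pipe-delimited
    header followed by tab-separated key=value attribute pairs."""
    header = f"LEEF:1.0|{vendor}|{product}|{version}|{event_id}|"
    body = "\t".join(f"{k}={v}" for k, v in attrs.items())
    return header + body

# Hypothetical check-out event (names are illustrative only):
event = to_leef("Lieberman", "ERPM", "4.0", "PasswordCheckOut",
                {"usrName": "jsmith",
                 "resource": "IBMI-PROD",
                 "devTime": "2011-02-07T09:14:00"})
```

A SIEM consuming records like this can then correlate the check-out with subsequent logins and configuration changes on the target system, which is exactly the "who is behind the activity" linkage the vendors describe.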

Lieberman Software president and CEO Philip Lieberman says the integration "closes the loop" on security event management. "With this 360-degree view of security events Lieberman Software and Q1 Labs can show not only what is happening, but also who is behind the activity--effectively ending anonymous access to privileged accounts."

Strong sales of ERPM fueled a robust fiscal 2010, with year-over-year revenues increasing nearly 40 percent, Lieberman said last month. The company attributes the increased sales to a boost in awareness, including the new integration points with SIEM vendors like Q1 Labs and ArcSight.

Tuesday, February 1, 2011

Berkeley Heights man wins Japan Prize for inventing UNIX operating system

Forty years after they invented the UNIX computer operating system at Bell Labs in Murray Hill, Berkeley Heights resident Dr. Dennis Ritchie and Dr. Kenneth Thompson will receive the Japan Prize.

“I was surprised. I was not expecting this,” Ritchie said in a telephone interview. “It was so far back.”

He explained the two aspects he and Thompson worked on were based on an earlier language. “It did not come out of the blue,” he said. They modified a language that was initially developed at MIT, he said, which later became the C language.

The $600,000 award will be presented on April 20 in Tokyo to both scientists, who will divide it. Ritchie said Thompson flew to Japan for the announcement, but Ritchie sent his response by video from Bell Labs.

He plans to use his part of the proceeds to fly his siblings and spouses to Japan for the event. None of his siblings pursued engineering or science, he said. One brother is a retired superintendent of schools in the Boston area, another brother and his wife run a toy company in the Washington, DC area and he has a sister who has lived in England for many years.

Ritchie, 69, has lived in Berkeley Heights for 15 years. He was born in Bronxville, NY, grew up in Summit and attended Summit High School before going to Harvard University. While there, he attended a lecture on the concept of computers and became intrigued. He shifted his focus from physics to computer programming. He recalled seeing his first computer, which he described as “a big square cubicle box.” He was a graduate student in Applied Mathematics, with a 1968 doctoral thesis on subrecursive hierarchies of functions. “I like procedural languages better than functional ones,” he has said.

Ritchie joined Bell Labs in 1967, where his father, Alistair E. Ritchie, spent his career. The elder Ritchie was co-author of “The Design of Switching Circuits,” with W. Keister and S. Washburn, an influential book that came out just before the transistor era. Asked if he could have envisioned the rapid technological changes today, Dennis Ritchie said, “I’m not a futurologist.” Ritchie retired from Bell Labs in 2007, but continues as an emeritus staff member.

He met Thompson while working at Bell Labs, now Alcatel-Lucent, in Murray Hill. Thompson, 67, who grew up in New Orleans, had already experimented with a language for personal computers, emphasizing simplicity. Together they developed the UNIX system which became so popular in part because it was distributed to universities and research institutions and became known as “open source” computing. Thompson now works with Google in California.

Both men received the U.S. National Medal of Technology Award from President Bill Clinton as well as numerous commendations for their work.

“Dennis and Ken changed the way people used, thought and learned about computers and computer science,” Jeong Kim, president of Alcatel-Lucent Bell Labs, said in a press release. He added the UNIX system and the C programming language have revolutionized computing and communications, making open systems possible.

The Japan Prize was established in 1985 to honor achievements in science and technology.

Wednesday, January 26, 2011

Play Tetris in your living room with Tetris Link board game

If you look at the earliest computer games, you’ll notice a lot of them were an attempt by programmers to turn an existing card, board or table game into a bunch of ones and zeroes. Pong, for example, is a simplified digital version of ping pong; the earliest Unix games were simple text-based versions of Chess, Checkers and Go Fish, while early dungeon crawls like Rogue and NetHack were attempts to recreate pen-and-paper Dungeons & Dragons.

Of course, somewhere along the line, game developers realized they had far more options to exercise their imaginations making games on computers, and now you’d be hard pressed to translate any computer game to a form you could play on a table. 

That’s why I’m fascinated to see this board game version of Tetris called Tetris Link creep out of the latest world Toy Fair. Basically, it takes the tetromino concept and applies it to a Connect 4-style gameplay mechanic, in which players must use their tetrominoes to link three or more pieces together, while other players try to block them any way they can.

Sure, it’s not really Tetris without being able to wipe out four rows at once by dropping in an I piece, but even so, heck, I’d play this. If only the set played a chiptune rendition of “Korobeiniki” as you dropped your pieces.
Read more at Pocket Lint

Monday, January 17, 2011

Xfce 4.8.0 desktop environment released

After nearly two years of work, the Xfce development team has released version 4.8.0 of Xfce. The open source desktop environment for UNIX and Linux platforms aims to be fast and lightweight, while still being visually appealing and adhering to standards.

According to the developers, Xfce 4.8 is their "attempt to update the Xfce code base to all the new desktop frameworks that were introduced in the past few years". The latest stable version supersedes the 4.6.x branch of Xfce and features several advancements, such as browsing remote shares using a variety of protocols (SFTP, SMB, FTP, etc.). Window clutter has also been reduced by merging all file progress dialogues into a single dialogue.

Improvements to settings dialogues include support for RandR 1.2 in the display configuration dialogue and updates to the manual settings editor. Other changes include the addition of an eject button for removable media, a new 'fuzzy' clock, a new directory menu plugin and improved keyboard layout selection.

A tour of the major new visual features in Xfce 4.8 is available on the project's web site – screenshots are also provided.

More details about the latest version of the Xfce desktop environment can be found in the official release announcement and in the change log. Many distributions support Xfce and users can install the updated version by downloading the latest packages. A list of Xfce based Linux distributions can be found on the Xfce distributions page.

Sunday, January 9, 2011

LCA 2011 keynotes: Allman to focus on sendmail

Eric Allman is the father of sendmail, the first general-purpose mail transport agent for UNIX systems. He will be in Australia later this month as one of the keynote speakers at the forthcoming Australian national Linux conference in Brisbane.

Allman developed the precursor of sendmail, delivermail, in 1979 as an extension to the AT&T Unix code that was available at the University of California, Berkeley, where he was studying for a computer science degree.

Sendmail was designed to deliver email over what was then a relatively small ARPANET which had several different smaller networks. Most of these networks had differing headers for email.

Allman's MTA soon became an important part of the Berkeley Software Distribution. It is still widely used on UNIX systems despite being difficult to configure. Alternatives like postfix (written by Wietse Venema), exim (written by Philip Hazel) and qmail (authored by Daniel Bernstein) have gained ground as they are much easier to configure.
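The configuration gap those newer MTAs exploited is easy to illustrate. As a rough sketch — host names invented, and not a complete working setup — here is how a common "relay all outbound mail through a smarthost" policy is typically expressed in sendmail's m4 macro file versus postfix's main.cf:

```
dnl sendmail.mc -- regenerate sendmail.cf with m4 after editing:
define(`SMART_HOST', `relay.example.com')dnl

# postfix main.cf -- takes effect after "postfix reload":
relayhost = relay.example.com
```

Even with the m4 macro layer, sendmail administrators ultimately live with the generated sendmail.cf and its famously cryptic rewriting rules; postfix keeps the whole configuration at the level of readable key = value directives.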

But veterans still swear by sendmail. As far as GNU/Linux goes, Slackware, one of the older distributions, still uses sendmail as its default MTA, though Debian has moved to exim and Red Hat to postfix.

Though Allman, who now works at Sendmail Inc, a company he co-founded in 1998, has also authored software such as syslog (a standard for logging program messages) and several other programs, he is best known for sendmail given its degree of use and the fact that it was the first MTA.

Thus it is not surprising that in his keynote, he will focus on the software that has become synonymous with his name.

"Briefly, my talk is going to explore the architecture of the sendmail MTA from a historical/introspective perspective," Allman told iTWire in an interview. "Like so many other tools, sendmail was originally written as a quick hack to solve an immediate problem; unlike most other tools, it is still around over 30 years later and continues to be one of the major MTAs on the internet."

He said he would first provide an overview of the historic situation. "What were machines like? What already existed in the email world? What was happening that triggered the problem?"

Then would come an examination of design principles and the early days of the evolution of sendmail. "Why did I do it the way I did? How did sendmail change as the world changed?"

Allman then plans to take a somewhat deeper dive into "individual design decisions (as distinct from design principles), including some analysis of whether they were good decisions, bad decisions, or decisions that should have been changed over time."

And to finish, there will be "an overview of what I would do the same, what I would do differently, and what we can learn."

"My hope is that people may be able to take away some knowledge they can apply when architecting a new system," Allman said. "I'm a pragmatist, and a lot of what you read in 'the literature' is disturbingly bogus for actual use in the trenches."

He said that to be honest, there wasn't very much specifically about open source in his keynote.

"The principles I used with sendmail are identical to those I would have used with commercial software - I'm of the school of thought that all software should be written as though it was going to be open sourced, even if it obviously will not, because I think programmers do a better job if they think that others will be evaluating their source code. But that's about as far as it goes."

LCA 2011 will be held at the Queensland University of Technology from January 24 to January 29.

Tuesday, January 4, 2011

Does CPTN Spell the End for Open Source Software?

It's not often that the likes of Microsoft, Apple, Oracle and EMC jump into bed together, so when they do, you have to ask yourself what on earth they are up to.

Late last year Attachmate announced its plans to acquire Novell, and that as part of the deal it would sell a whole bucket-load of patents (882, to be precise) to a mysterious outfit called CPTN Holdings for around $450 million. All that was known at the time was that Microsoft was behind CPTN, and Novell would continue to own the rights to UNIX.

What we've subsequently learned, thanks to those nice people at the Bundeskartellamt, Germany's national competition regulator, is that Apple, Oracle and EMC are also involved in CPTN.

The Open Source Initiative (OSI), a non-profit corporation that educates about and advocates for the benefits of open source, is so concerned about this unholy alliance that it has asked the Germans to investigate the transaction. Why the Germans? Perhaps because they have a record of imposing multi-million Euro fines on Microsoft (NASDAQ: MSFT) for anti-competitive behavior in the past, or maybe because the Bundeskartellamt was open to receiving comments about the transaction from the public.

Among the points the OSI has raised with the Bundeskartellamt are that:

* CPTN represents a serious threat to the growing use of open source software because its founders and leaders have a long history of opposing and misrepresenting the value of open source software.
* CPTN principals have substantial market power in operating systems (Microsoft, Apple and Oracle), middleware (Microsoft and Oracle), and virtualization and cloud computing (Microsoft, Oracle and EMC). Open source is a significant competitive threat in operating systems (Linux and Android), middleware (Apache and JBoss), and virtualization and cloud (KVM and Xen hypervisors).
* CPTN creates a cover to launch patent attacks against open source while creating for each principal a measure of plausible deniability that the patent attack was not their idea.
* The patents CPTN bought could be sold to non-practicing entities (NPEs), which could then create havoc for open source software without risking the adverse reaction of the market if Microsoft or one of the other three were to sue directly.

Now let's not forget that as yet no one outside of Novell and CPTN knows exactly which patents make up the 882. What we do know is that they don't include anything to do with direct ownership of UNIX. We can also speculate that they are connected with networking, virtualization and data center technologies -- three areas in which Novell has been heavily involved. Bear in mind the words of Gregory House, however: "The problem with speculation is it makes a speck out of you and some guy named Lation."

So going back to the original question of what on earth Microsoft, Apple, Oracle and EMC could be up to, the OSI's worry is that they plan to hide behind CPTN while launching patent attacks against open source software directly, or through third parties to which CPTN sells patents. The reason OSI believes the four may do this is that they have a history of opposing open source software, and because open source software poses a threat to them in the fields of operating systems, middleware, and virtualization and cloud.

I for one don't buy that for a moment. Open source software certainly competes with products these four companies sell, but the point that OSI's argument ignores is that these companies also compete with each other -- ferociously at times. The idea that Microsoft, Apple, Oracle and EMC all have their backs to the wall, trapped together like dogs awaiting the final coup de grace from the open source community, and their only hope of salvation is to band together to become patent trolls, is, frankly, rather preposterous. Open source software is great, but it's not that great, and Microsoft, Apple, Oracle and EMC are far from being technology dinosaurs with nothing but their patent portfolios to profit from.

What other possibilities are there then? Maybe Microsoft wanted to get its hands on some of the patents, but it knew that attempting to buy them alone would be impossible for anti-trust reasons, so it came to "an arrangement" with the other three? Possibly.

Perhaps the four bought them out of the kindness of their hearts, to license for a nominal fee to all and sundry for the benefit of the community as a whole? Seems unlikely.

Or, perhaps all four companies wanted to avoid a bidding war and agreed to share ownership of them to guarantee access to the technologies they represent at a low price. John Paczkowski over at All Things Digital reckons an insider at one of the big four confirmed just that. "We get to buy in at a cheap price and get a license to a very valuable portfolio. It's cheap defensive insurance," the unnamed individual told him.

It's certainly plausible: a defensive move that keeps the playing field even. Microsoft, Apple, Oracle and EMC working together in this way is nothing the OSI should be worked up about. Probably.