Linux

Discussion in 'Off Topic Area' started by SpikeD, Aug 16, 2011.

  1. aikiwolfie

    aikiwolfie ... Supporter

    If you're an OEM like Dell selling desktops and laptops you'll want to be in the logo program since it's about more than just getting a sticker. How much work is involved in getting a set of keys for Linux depends on how secure boot is implemented.

    The proposal at the moment is that the OEMs will issue the keys themselves. Unless there is a disable option, that means a lot of work for the distribution developers, particularly the smaller independent ones.
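
    Incidentally, the firmware exposes whether secure boot is switched on as a standard UEFI variable, so an OS can at least detect which situation it's in. A minimal sketch in Python, assuming a Linux system with efivarfs mounted:

        # Minimal sketch: check whether UEFI secure boot is active from Linux
        # by reading the standard SecureBoot EFI variable via efivarfs.
        from pathlib import Path

        # The GUID is fixed by the UEFI spec for this variable.
        SECUREBOOT_VAR = Path(
            "/sys/firmware/efi/efivars/"
            "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
        )

        def secure_boot_state():
            """Return 'enabled', 'disabled' or 'not UEFI'."""
            try:
                raw = SECUREBOOT_VAR.read_bytes()
            except (FileNotFoundError, PermissionError):
                return "not UEFI"  # BIOS boot, or efivarfs not mounted/readable
            # First 4 bytes are attribute flags; the payload for SecureBoot
            # is a single byte: 1 = enabled, 0 = disabled.
            return "enabled" if raw[4] == 1 else "disabled"

        print("secure boot:", secure_boot_state())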

    It's not just GNU/Linux that's affected. Projects like Haiku, Minix and the various flavours of BSD would be prevented from running if there were no disable option. Legacy OSes like XP and MS-DOS, which some companies simply can't do without, will also be affected.

    So while some people might be overreacting, it is an important question that needs answering. Given that the likes of Dell are heavily involved in Linux, some distributions will continue to function just fine even without a disable option. Distributions like Red Hat/Fedora and Ubuntu will survive. Something like Arch Linux or Puppy Linux might be left out in the cold though, consigned to legacy hardware only.

    There is also an ideological dimension to the argument, and that is who owns your PC. If you own your PC, why should anybody else have the right to tell you what software to run? It's a fair bet the loudest voices have an ideological agenda to push.

    The problem with your examples though is that we're already halfway there anyway. In my experience, most companies big enough to even think about this as an issue are already using application servers and centralised storage, which means that if the network goes down, the vast majority of employees in the company are left twiddling their thumbs.

    Basically the vast majority of employees with workstations at their desks should really be using thin clients with the servers doing the grunt work.
     
  2. LilBunnyRabbit

    LilBunnyRabbit Old One

    I'd say that your data processing needs are somewhat different to my own. When you've got different users all requiring data processing on the same set, it's much better to parcel it out to individual user stations rather than to try and lump it all onto a single server.

    The 'true' cloud you're talking about isn't resilient to failure. For one thing you've got a single point of failure in most companies' internet connections, then you've got the cascading failures which have hit the Google, Microsoft and Amazon clouds. Now, while these failures are less frequent than local equipment failures, the problem is that a single cascading failure will take down your entire suite of cloud services, whereas one server blowing up locally only takes down one.
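
    A back-of-envelope way to see that trade-off; all the availability figures below are made-up assumptions, purely for illustration:

        # Components in series multiply their availabilities; redundant
        # components in parallel multiply their *un*availabilities.
        # Every figure here is an illustrative assumption, not a measurement.
        isp = 0.995          # a single office internet connection
        cloud = 0.9995       # the cloud provider itself
        local_server = 0.99  # one commodity server

        # Cloud route: your connection AND the provider must both be up.
        cloud_route = isp * cloud

        # Local route: a failover pair; service survives unless both die at once.
        local_pair = 1 - (1 - local_server) ** 2

        print(f"cloud via one ISP link: {cloud_route:.4%}")  # ~99.45%
        print(f"local failover pair:    {local_pair:.4%}")   # ~99.99%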

    I'm not denying that there are advantages to the cloud, but there are cons as well.

    Well no, locally they're generally hooked up to several central servers with both server and connection resiliency in place. Even with a simple failover cluster, if one set of servers/switches goes down you'll still have provision. Meanwhile with the cloud, if either their datacentres fail in a manner which cascades or your connection fails, you've lost everything - and you have no control over the fix.
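
    From the client's side the failover idea looks something like this sketch (the hostnames and port are hypothetical placeholders):

        # Client-side failover: try each server in order and use the first
        # one that accepts a TCP connection.
        import socket

        SERVERS = ["app-primary.corp.example", "app-standby.corp.example"]
        PORT = 8080

        def connect_with_failover(hosts, port, timeout=2.0):
            """Return a connected socket to the first reachable host."""
            last_error = None
            for host in hosts:
                try:
                    return socket.create_connection((host, port), timeout=timeout)
                except OSError as exc:
                    last_error = exc  # that server/switch is down; try the next
            raise ConnectionError(f"all servers unreachable: {last_error}")

        # sock = connect_with_failover(SERVERS, PORT)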

    It can always be disabled though - so where's the problem? And bear in mind Dell also sell servers, and tend to use the same firmware across the board. They're not going to create a set of firmware separately for their desktops, and they're not going to force servers to use secure boot because businesses will simply go elsewhere.

    And there'll be a disable option. I still find it amusing that one of the main benefits espoused by OSS-followers is security, and now they're actively fighting against a security measure.

    Which is why there'll be a disable option. What's the problem?

    Again - there'll be a disable option. What's the issue?

    I own my PC - why shouldn't I have the option to use secure boot?

    A network is not a single entity. About the only thing that will take down an entire (well-designed) corporate network is a site-wide power cut, which the cloud does nothing to prevent. Of course badly designed networks are more vulnerable.

    A server powerful enough to do the bulk of the data processing for a lot of employees is far more expensive than one workstation per employee plus a server powerful enough to provide the data. The centralised server should be providing data and running batch jobs. The workstation can handle any local data processing quite happily. You turn that workstation into a thin client and your provisioning in the server room, and the costs of it, are going to skyrocket.
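
    Rough numbers to illustrate the point (every figure below is an assumption, not a quote):

        # Back-of-envelope sizing for the two models. All numbers are
        # illustrative assumptions.
        users = 200
        cpu_per_user = 0.5   # server cores each user's work would need
        concurrency = 0.6    # fraction of users active at peak

        # Fat clients: desktops supply their own compute; the server
        # only serves files and runs batch jobs.
        file_server_cores = 8

        # Thin clients: the server room must also cover everyone's
        # peak compute on top of serving the data.
        terminal_server_cores = file_server_cores + users * cpu_per_user * concurrency

        print(f"fat clients  -> server cores: {file_server_cores}")
        print(f"thin clients -> server cores: {terminal_server_cores:.0f}")  # 8 vs ~68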

    Also with your thin client model, a single failure will take down everyone - even for local work. Office applications and similar will survive a server crash without the user noticing, if they're running locally. If they're running on the server then again you've just lost everyone's service provisioning.
     
  3. aikiwolfie

    aikiwolfie ... Supporter

    I haven't yet seen any OEMs putting people's fears to rest. So I guess we'll just have to wait and see.

    Your experience of workstations is different to mine. When our network goes down, everything goes down. MS Office doesn't even work. More importantly, we don't have access to the main customer database, which means even if Office did work, it'd be useless. Without the network we can't log in to a PC.
     
  4. LilBunnyRabbit

    LilBunnyRabbit Old One

    That doesn't make sense - MS Office is a local installation, unless you're using a thin client, so shouldn't be affected by a network failure. The only instance (other than with a thin client) where I can see it failing would be if all storage is centralised, rather than being local and synced to a central store in real-time.
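
    By 'local and synced to a central store' I mean something along these lines: work against a local folder and push changes to the share whenever it's reachable. A rough sketch (the paths are hypothetical):

        # Work locally, sync to the central store in the background.
        # If the share is unreachable, local work carries on and the
        # sync catches up later.
        import shutil
        import time
        from pathlib import Path

        LOCAL = Path.home() / "Documents"                 # local working copy
        SHARE = Path(r"\\fileserver\users\me\Documents")  # assumed UNC path

        def sync_once(local, share):
            """Copy files that are newer locally than on the share."""
            if not share.exists():
                return  # network down: keep working locally
            for src in local.rglob("*"):
                if not src.is_file():
                    continue
                dst = share / src.relative_to(local)
                if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                    dst.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(src, dst)

        if __name__ == "__main__":
            while True:
                sync_once(LOCAL, SHARE)
                time.sleep(30)  # poll; real sync clients use change notifications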

    Then again it sounds like your network has at least one single point of failure - not a good thing. Just as a point though, how exactly would the cloud save you from this, or thin clients? They still require connectivity.
     
  5. aikiwolfie

    aikiwolfie ... Supporter

    Storage is completely centralised. If an Office app like Word is already running, it still works, although clearly if the network is down it won't save to the shared storage. On the other hand, if the network is down and you then start Word, we get an empty application window with no menus or buttons or anything.

    A thin client doesn't solve that problem. Although if the company thought it would lose all functionality it would do more to avoid network issues.
     
  6. LilBunnyRabbit

    LilBunnyRabbit Old One

    There's still the problem that local applications don't work like that. I think your company has more important things to worry about than just their network failing - starting with hiring some good IT staff.
     
  7. aikiwolfie

    aikiwolfie ... Supporter

    There is. I suspect the Office apps are freaking out because they're looking for something on the network that they can't access because the network is down. In each of our profiles, for example, there's an Apps folder which holds the settings for all of our applications, local and networked. Profiles are held on a network share.
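
    The graceful fallback the apps apparently aren't doing would look something like this sketch, I'd guess (the paths are made up):

        # Prefer the settings folder on the network share; fall back to a
        # local cache when the share is unreachable. Paths are hypothetical.
        import os
        from pathlib import Path

        NETWORK_APPS = Path(r"\\profileserver\profiles\me\Apps")
        LOCAL_CACHE = Path(os.environ.get("LOCALAPPDATA", ".")) / "AppsCache"

        def settings_dir():
            """Return the roaming Apps folder if reachable, else a local cache."""
            if NETWORK_APPS.exists():  # share reachable
                return NETWORK_APPS
            LOCAL_CACHE.mkdir(parents=True, exist_ok=True)
            return LOCAL_CACHE         # network down: degrade gracefully

        print("loading settings from:", settings_dir())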
     
  8. LilBunnyRabbit

    LilBunnyRabbit Old One

    That shouldn't matter, unless you're hot desking a lot. Roaming profiles are locally cached by default when you log on to a machine, so even if it can't access the network you should be able to log on and use locally installed applications.

    One possibility is that the IT team have set up the domain to remove or prevent local profile caching.
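
    If you want to check, the 'Delete cached copies of roaming profiles' group policy writes a registry value you can read. A quick Windows-only sketch, assuming the standard policy key:

        # Check whether the domain policy stops roaming profiles being
        # cached locally. Key/value absent means caching is left at the
        # default (enabled).
        import winreg

        KEY = r"SOFTWARE\Policies\Microsoft\Windows\System"

        def profile_caching_disabled():
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
                    value, _ = winreg.QueryValueEx(key, "DeleteRoamingCache")
                    return value == 1
            except FileNotFoundError:
                return False  # policy not set: local caching is on by default

        print("local profile caching disabled:", profile_caching_disabled())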
     
  9. aikiwolfie

    aikiwolfie ... Supporter

    We hot desk all the time. In a typical day I can easily have logged in and out of 4 or 5 PCs.
     
