Thursday, 28 November 2013

GIP the best PowerShell alias

I recently discovered the PowerShell alias GIP, which runs the Get-NetIPConfiguration cmdlet.
This is the new ipconfig for PowerShell.
Here is an example of running the plain gip command.



Here is gip with the -Detailed option shortened to -Det.


Isn't that great!
It might not be THE BEST PowerShell alias, but it is up there.   I think GCM (Get-Command) would take that crown.

[Update] I just did a search and found a TechNet blog article about GIP, and also TNC, an alias for Test-NetConnection that looks like another great alias.   Read the article New Networking Diagnostics.
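For reference, here is roughly how the two aliases are used. This is a sketch; the host name is just an example, and the output depends entirely on your machine's network configuration.

```powershell
# Get-NetIPConfiguration via its alias, summary then detailed output.
gip
gip -Detailed

# Test-NetConnection via its alias: a ping-style check, then a TCP port test.
tnc www.bing.com
tnc www.bing.com -Port 443
```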


Thursday, 14 November 2013

Remove SkyDrive and Homegroup from Windows 8.1 File Explorer

Today I got sick of scrolling past SkyDrive and Homegroup nodes in the left pane of File Explorer in Windows 8.1.   I did a search and found a good post on Lifehacker and another post from a forum on how to remove them from the Explorer window.

Removing SkyDrive

I am adding the details here for my own reference but here is the link to the article from Lifehacker titled How to Get Rid of SkyDrive in Windows 8.1 Explorer.

The crux of the change is to set a registry value called Attributes to 0.   There is one problem though: the parent key, called ShellFolder, does not grant write permissions to user accounts.   You will need to take ownership of the ShellFolder key and give your user account read/write permissions before you can change the Attributes value.

The key of interest is
HKEY_CLASSES_ROOT\CLSID\{8E74D236-7F35-4720-B138-1FED0B85EA75}\ShellFolder
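Once you have taken ownership and granted yourself write access, the value change itself can be scripted.   A minimal sketch from an elevated PowerShell prompt (HKCR: is not a default PowerShell drive, so it is mapped first):

```powershell
# Assumes you have already taken ownership of the ShellFolder key and
# granted your account write access. Run from an elevated PowerShell prompt.
New-PSDrive -Name HKCR -PSProvider Registry -Root HKEY_CLASSES_ROOT | Out-Null
$key = 'HKCR:\CLSID\{8E74D236-7F35-4720-B138-1FED0B85EA75}\ShellFolder'
Set-ItemProperty -Path $key -Name Attributes -Value 0 -Type DWord
```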



Removing Homegroup

This one is a little easier.   Make sure you are not a member of any HomeGroup, then disable the two HomeGroup-related services in the services.msc utility.

The services are:

  • HomeGroup Listener
  • HomeGroup Provider
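If you prefer the command line over services.msc, the same change can be sketched in PowerShell from an elevated prompt.   The names below are the internal service names behind the display names above:

```powershell
# Stop and disable the two HomeGroup services (elevated prompt required).
'HomeGroupListener', 'HomeGroupProvider' | ForEach-Object {
    Stop-Service -Name $_ -ErrorAction SilentlyContinue
    Set-Service -Name $_ -StartupType Disabled
}
```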

After you have made the above changes, simply reboot the machine and the Explorer window will look like this:


Ah, that's better. No SkyDrive or Homegroup tree nodes.


Friday, 25 October 2013

Installing Harp on Debian Jessie

I recently came across the new open source web server called Harp.
Here is the description from the Harp website;


What is Harp?

Harp is a zero-configuration web server that is used to serve static assets. It has a built in asset pipeline for serving .jade, .markdown, .ejs, .coffee, .less, .styl as .html, .css, and .js. Harp supports a template agnostic layout/partial system and metadata for dynamically building files.


I thought I would take Harp for a drive around the block and decided to install it on a Debian Jessie virtual machine.
The installation process is straightforward except for one issue caused by the Debian Node.js package.

Here is the process to install Harp.

Firstly, there are two prerequisites for Harp: Node.js and npm, the package manager for Node.   I decided to install both from the Debian package repositories with these commands (run as root):

apt-get update
apt-get install nodejs npm

Once Node is installed, you can install Harp using the Node package manager.

npm install harp -g

The -g switch in the above command tells the package manager to make the Harp install global rather than a local directory install.

Harp is now installed and everything should be ready to go!   There is a problem though.   If you run the following command:

harp --version

You will get a very misleading error.

/usr/bin/env: node: No such file or directory

You can be forgiven for thinking that the harp binary itself was not found.   This is not the case.   The problem is that Harp invokes Node.js using the command 'node', while on a Debian system the binary is named 'nodejs'.

This is easy to fix with a symbolic link.   Simply run this command as root.

ln -s /usr/bin/nodejs /usr/bin/node
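As an aside, Debian also offers a packaged way to get the same result: the nodejs-legacy package exists purely to provide the /usr/bin/node symlink, so this should work instead of creating the link by hand:

```shell
# Alternative to the manual symlink (run as root):
# nodejs-legacy simply provides /usr/bin/node pointing at nodejs.
apt-get install nodejs-legacy
```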

Now if you run Harp everything works as expected.

harp --version
0.9.4

All that is left is to follow the instructions on getting started to use the Harp web server.


Wednesday, 16 October 2013

A PowerShell Script to Warm Up or Wake Up SharePoint 2013

I was discussing SharePoint warm up solutions with some colleagues today and reviewed some of the solutions on the web.

The reason SharePoint needs warming up is that the first time a page is accessed, the ASP.NET Just-In-Time (JIT) compiler has to compile the code behind it.   Unfortunately this compilation is repeated daily because the Internet Information Services (IIS) World Wide Web Worker Processes (w3wp.exe) that host the SharePoint applications are recycled each day.

I decided to try my hand at writing one in PowerShell. Here is a simple solution.

Access the Gist here: https://gist.github.com/grantcarthew/7000687
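The Gist has the full script, but stripped right down, the core of any warm-up is just an authenticated request to each web application so IIS compiles the pages.   A minimal sketch (the URLs are placeholders for your own farm):

```powershell
# Minimal warm-up sketch, not the full script from the Gist.
# Replace the placeholder URLs with your own web applications.
$sites = 'http://sharepoint.contoso.com', 'http://mysites.contoso.com'
foreach ($url in $sites) {
    try {
        # -UseDefaultCredentials authenticates as the account running the script.
        Invoke-WebRequest -Uri $url -UseDefaultCredentials -UseBasicParsing | Out-Null
        Write-Host "Warmed up $url"
    } catch {
        Write-Warning "Failed to warm up ${url}: $_"
    }
}
```

Schedule something like this to run after the nightly application pool recycle and the first-hit page loads become a non-issue.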

Tuesday, 15 October 2013

A Telnet Client written in PowerShell

A little while ago I started writing a telnet client in Microsoft's PowerShell.   After it sat unfinished for a time I finally got around to improving it with multiple threads and cleaning it up.   It now works well so I decided to post about it.

Why would I write a telnet client in PowerShell?   Just for fun mainly.   It has been a good project to learn some more about PowerShell and may be of use for automating the configuration of Cisco routers and switches or running scripts on other servers.

The most interesting thing I learned as I worked through the project was how to get PowerShell to support multiple threads.   Using the low-level .NET Framework System.Net.Sockets.Socket class added to the complexity.

To start off with, I created the telnet client using a "while" loop that ran continuously, causing the script to consume 20% of the CPU while doing nothing.   I couldn't fix this with a sleep timer because it made the client unresponsive.   The problem was I needed to respond asynchronously to data received over the TCP connection, and also respond to user input at the console.   Very easy to do in C#, but in PowerShell?

To implement the reception and transmission of data at the same time asynchronously, I started by trying to use the Asynchronous Programming Model from the .NET Framework.   This is a little tricky because the thread pool used by PowerShell is different from the thread pool used by .NET.   I did find a way of using the async callback methods with PowerShell from a blog post by Oisin Grehan.   I still had issues trying to get this to work though.

I gave up on using the async methods of the Socket class and started looking for alternatives.   It would have been nice to use the Register-ObjectEvent cmdlet and other event cmdlets but the Socket class does not have any publicly visible events to consume.

I briefly looked at the PowerShell Jobs cmdlets, but they didn't suit this application because they use the remoting subsystem, which serializes objects as they are passed between jobs.   This means passing an object by reference is not possible, and I needed a reference to the connected Socket.   That's when I came across the concept of creating a new PowerShell object using [PowerShell]::Create().

When [PowerShell]::Create() is called from a PowerShell script or console, a new PowerShell instance with an empty pipeline is returned for you to make dance and sing any way you like.   The beauty of this new PowerShell object is that you can pass objects by reference, meaning I could pass in the connected Socket.

So now I have two threads in my PowerShell telnet client.   The main PowerShell process creates a child PowerShell process, initiating it with a script that receives data from the socket.   After initiating the child, a "while" loop is used with a blocking call to the $Host.UI.RawUI.ReadKey() method to wait for user input.

Rather than explain the code in any more detail, I will let the code do the talking.   If you want to use this code use the Gist link: https://gist.github.com/grantcarthew/6985142
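To show the pattern without the whole client, here is a cut-down illustration of [PowerShell]::Create() handing a live object to a child instance by reference.   A synchronized queue stands in for the connected Socket; this is not the telnet client itself:

```powershell
# Illustration of the threading pattern only, not the telnet client.
# A synchronized queue stands in for the connected Socket object.
$queue = [System.Collections.Queue]::Synchronized((New-Object System.Collections.Queue))

$child = [PowerShell]::Create()
[void]$child.AddScript({
    param($sharedQueue)
    # This runs on its own thread; the parent sees these items appear.
    1..3 | ForEach-Object { $sharedQueue.Enqueue("received $_") }
}).AddArgument($queue)

$handle = $child.BeginInvoke()      # start the child asynchronously
$child.EndInvoke($handle)           # wait for it to finish
$child.Dispose()

$queue                              # the parent sees the child's enqueued items
```

In the real client the child loops on Socket.Receive() while the parent blocks on $Host.UI.RawUI.ReadKey().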


Thursday, 12 September 2013

Simple Git Workflow for PowerShell Script Development

I create a lot of PowerShell scripts and source control comes in very handy for replicating the scripts to servers and managing versions and branches.

I discovered Git many years ago and it works really well for managing PowerShell scripts.

This blog article is a reference for the commands to manage a script repository.

The first thing you need in order to manage PowerShell scripts with source control is Git itself.   My favorite package for working with Git is the portable version of Git for Windows called msysgit.   Make sure you download the portable version, although there is nothing wrong with the fully installed version of Git.   There are many other popular 'Git for Windows' packages such as GitHub for Windows and Posh-Git.

Once downloaded, extract the files and run the 'git-bash.bat' file to get to the *nix based prompt.   Notice you will need to use forward slashes / rather than back slashes \ in your file paths including UNC paths.   Other *nix based tools are also available here such as 'ls -l' and 'rm' etc.


Once you have Git portable running you will probably need to copy it to any server or workstation that you will be writing scripts or creating repositories with.

The first thing you need to do in the Git bash prompt is tell Git who you are.   This will save your user name and email address for assignment to any commits you make within a repository.   You do this with a couple of git config commands.   Whilst you are configuring Git you may as well set the default push mode to simple as described here, otherwise a message will be displayed stating the default is changing in Git v2.0.

git config --global user.name "Grant Carthew"
git config --global user.email "your@emailaddress.com"
git config --global push.default simple

You will need to run these commands on each workstation or server you intend to run Git on.

When you run the global config commands, a .gitconfig file is created in the root of your profile.   This .gitconfig file holds the simple text configuration.
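For reference, the resulting .gitconfig is just a small INI-style text file along these lines (with the name and email you configured above):

```ini
[user]
	name = Grant Carthew
	email = your@emailaddress.com
[push]
	default = simple
```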



Now that you have Git configured for your workstations and repository file server, let's create a repository to host our PowerShell scripts.

On a file server create a shared folder where you wish to host your Git repositories.   I tend to create a folder called GitRepo.   This folder will be accessed using a UNC path such as \\FileServer\GitRepo.

Now on the file server using Remote Desktop, run the Git bash and change directory to the GitRepo root folder.   Within this folder create a new folder to host your PowerShell scripts and place a '.git' at the end of the folder name.   The '.git' extension is just a convention used for bare repositories.   Change directory into the new folder.   See the example in the screenshot below where I am creating a Reporting.git repository.


Now the fun part: time to initialize a Git repository.   The best type of repository to create on a central file server is a bare repository.   A bare repository does not host a working tree, which is Git's name for the checked-out copies of your files.   The files in a bare repository cannot be edited directly because they are stored in the Git object store.   One simple command will make a bare repository.

git init --bare


That's it.   You now have a starting point for hosting your PowerShell scripts.   Open the folder in explorer if you wish to see the folder structure that has been created by the init command.


There is nothing further to do on the file server.   Disconnect your remote desktop session to start working on your workstation.

Run the Git bash prompt on your workstation and change directory to the folder you would like to use for script development.   Do not create a folder for the new repository; Git will do that with the clone command.

Now that you are in the correct folder in the Git bash, run the following command to clone the file server's repository.

git clone //FileServer/GitRepo/Reporting.git

This command will clone or copy the repository from the file server into a new folder.   In this case I am using a folder called Reporting.   Notice the '.git' extension is not included in the new folder name.   There will be a warning about cloning an empty repository which you can ignore.


If you open the new cloned repository in explorer you will see a '.git' subfolder which is where Git hosts all versions of files in the repository and other configurations.   If you ever want to remove Git from this folder, simply delete the '.git' subfolder and it is gone.


Now it is up to you to develop a heap of wonderful PowerShell scripts to weave magic across your server farm.   Once you have created the scripts in the new repository folder, it is time to commit them into Git.

One of the best commands in Git is git status.   Run git status now and you will see the new script files you have created in the repository listed as untracked files.

git status


As untracked files they are not yet part of the repository and need to be added to Git.   This is done with the git add command.   The easiest way to stage all of the new, changed, and deleted files is to use git add with the parameters below.

git add . -A

The above git add command uses the file pattern '.', meaning all files and folders.   The '-A' parameter is short for '--all' and tells Git to stage every change, including file deletions.   The git add command does not save changes into the repository just yet; they are only staged.   Run a status command again to see the staged files.

git status


Now that the new script files are staged we can commit them into the master branch of the repository with the following command.

git commit -m "Initial commit for the Reporting repository"


The changed or new files are now committed to the Git repository on your workstation.   They have not been saved back to the file server yet.   Let's run the status command again just for kicks.

git status


As you can see from the git status report, Git now sees the working tree as clean because all the files in the directory are committed.   The git status command is great for letting you know if you have changed files.

You can keep working on your workstation script files if you wish but I tend to like pushing the changes back to the file server after every commit.   Because we cloned an existing repository from the file server there is no need to tell Git where the original repository is located.   To push the changes up to the file server simply type in this command.

git push origin master


From now on you can simply run 'git push' without the 'origin master' parameters.   You can continue working on the script files and commit the changes up to the file server following the same workflow above.
Here is a git status with a new file and a changed file.


So to add these changes into the local and remote (origin) repositories we do the following.

git add . -A
git commit -m "commit message"
git push


Up to now we have been building the repositories and developing the scripts.   To use the scripts on a target server you have some PowerShell Remoting options, or you can get a copy of the scripts onto the servers.   I find a handy way of working with PowerShell scripts is to store them in a location on the server that is included in the Path environment variable.   That way you can call your scripts as if they were cmdlets.
To get a copy of our scripts onto the target machines we can simply clone the file server repository.

git clone //FileServer/GitRepo/Reporting.git



Once the files are cloned onto a server you can run the scripts, or update them, creating more commits and pushing the changes back to the file server.   Note that Git is a fully disconnected tool, so it will not tell you if the original repository on the file server has updated files.   Before you run any scripts in a target server repository, always do a git pull to bring down the latest changes from the origin.

git pull

In the following screenshot of the git pull command you can see I have merged the 'Usage' scripts into a 'Publish-ResourceUsage' script which involved deleting three scripts and renaming one.


All of the above work within Git has used what is called the 'master' branch of the repositories.   You can create branches for major changes without affecting the 'master' branch.

I am not going to explain branching in much detail here.   As a quick reference, these are the commands you would use for branching.

git branch webreports
git checkout webreports
*make crazy changes to your scripts*
git add . -A
git commit -m "Created the new web reporting script"
git push origin webreports
*happy with your changes?*
git checkout master
git merge webreports

It probably goes without saying that Git is not designed specifically for PowerShell scripts.   It will work with any files you want to manage.   Once you start using it and become comfortable with the commands, you could use Git to manage all of your scripting and programming source code.

In summary, Git has saved my bacon many times when working with large numbers of scripts and other files.
If you adopt Git two things will happen, you will start to love it, and you will be more efficient at managing your PowerShell scripts.

Lastly, be aware there are public Git hosts you can use for free.   Each has a subscription model.   I use GitHub for public hosting although I am not using it much yet.   For private Git repositories I use Bitbucket.

[Edit 1] - I just read through this tutorial on Git and it is one of the best ones I have seen.   It approaches Git from a real world point of view and ignores some of the more complex features.


Monday, 2 September 2013

Windows Server 2012 Core Keyboard Input Method Changes

Here is an issue I just had with Windows Server 2012 installed as a Server Core system.   For those not familiar with a Core install, there is no desktop, just a command prompt for managing the system.

As it turned out, the default keyboard input on the system I was trying to manage was set to the UK layout.   If you are using a US keyboard on a system set to the UK layout, some of the keys will be mapped incorrectly.   A very good example is the pipe symbol.   I was trying to use PowerShell to configure the server and discovered I could not pipe command objects.

An easy fix, I would have thought, except that with Windows 8 and Windows Server 2012, Microsoft moved the keyboard input configuration into the Metro start screen, and a Server Core installation does not have the Metro interface at all.

After some searching I found the best way to fix the problem: one registry change.
You can run the registry editor on Server Core installations, so simply run regedit.exe and navigate to
HKEY_CURRENT_USER\Keyboard Layout\Preload, then change the value named "1" to the desired input method.   The US layout is 00000409.

Here is some more detail about the registry keys:
http://support.microsoft.com/kb/102987/en-us
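Since PowerShell works fine on Server Core, the same change can be made without regedit at all.   From the PowerShell prompt:

```powershell
# Switch the first preloaded keyboard layout to US (00000409).
Set-ItemProperty -Path 'HKCU:\Keyboard Layout\Preload' -Name '1' -Value '00000409'
```

You may need to log off and back on for the new layout to take effect.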


Tuesday, 27 August 2013

Public BTSync Secrets

Following is a list of BitTorrent Sync secrets I am using to share files with colleagues:

SharePoint Support Files
Read only secret: B3XNA62PYSAAH7LGKCIORW536DVG3OME2
This repository hosts information and files related to installing, configuring, maintaining, and developing for Microsoft's SharePoint.

Lync Support Files
Read only secret: BXJDISWQTV6S2MJKVCK3HBV3DDPXZR4TG
This repository hosts information and files related to installing, configuring, maintaining, and developing for Microsoft's Lync Server.

PowerShell Support Files
Read only secret: B5CQA6PNTYAMQCJBUBNB32XQV4WJGL2YQ
This repository hosts resources for working with Microsoft Windows PowerShell.

If you would like read/write access to any of the above repositories so you can add to the resources, please contact me and I will send you the secret.

This list will expand as I add more repositories.


BitTorrent Sync Thoughts

Update: Don't use BTSync. Check out Syncthing instead.

As I broaden my usage of BitTorrent Sync (BTSync) I am coming up with more and more ways of making my life easier with this brilliant tool.

Here is a quick list of ways to use BTSync:
  • Sync your photos from your phone to your desktop
  • Sync your music from your desktop to your phone
  • Backup your files
  • Deploy software development solutions such as websites or script files (PowerShell).
  • Share files with family and friends
  • Support your family and friends remotely (screenshots etc.)
  • Share files with customers
  • Sync your installed games to multiple machines

One of the problems I had when first using BTSync was the naming of the folders and where to store them.   After using it for a month I came up with a good name for the root folder where I place all my synchronised folders.   I now have a folder called "Pool" and inside are all of the folders being synchronised.


This Pool folder is used for simple synchronised folders while I have backups and other large synced folders stored elsewhere.

BTSync is still quite new and I started using it with a little apprehension.   Having built up trust in the stability of BTSync over the last month, I decided to start synchronising a large folder on my home file server.   My file server runs on a Raspberry Pi, so I expected the indexing (hashing) process to be rather slow on a 900GB directory of pictures, music, documents and more.   As it turns out, the btsync daemon crashes often.   It looks like the software needs more resources than the Pi can deliver, which is a real shame.   I am hoping bugs like this will get fixed as the product matures.

If you haven't given BTSync a spin yet, give it a go.   You may find it saves you time and gives you access to your files in ways you never thought of.

If you have thought of a unique way to use BTSync, let me know about it in the comments below.


Monday, 26 August 2013

Determine the Active IP Address on a Windows Machine with PowerShell

If you have ever had to install multiple NICs in a Windows machine, or use virtualization technology like VirtualBox, it can be difficult to determine your local network IP address because of all the logical network interfaces.   It is easy enough to run the ipconfig /all command and read the results, but determining from a script which IP address is being used to access remote networks requires a little work.

The best approach I could come up with was to use the Windows routing table.   If you run the "route print" command on a Windows machine you will see the default network interface that is being used to route traffic to remote networks. Using this information you can determine which IP address is being used as the source address when accessing remote networks.

I needed to detect the local subnet on a Windows 7 machine and came up with the script below. It returns a PSCustomObject populated with the active network interface's details.

Using three WMI classes, Win32_IP4RouteTable, Win32_NetworkAdapter, and Win32_NetworkAdapterConfiguration, I could determine the information I was after.

First I retrieved the Windows routing table filtered to the default routes, sorted the routes by metric, and selected the route with the lowest metric. This gave me the InterfaceIndex of the active network interface. Using this index number I could retrieve the network adapter and its configuration.

To use this script either copy the text below into a new function or, as I do, copy it into a ps1 file saved in your path and call the file directly. Here is the Gist link: https://gist.github.com/grantcarthew/7000687
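Stripped down to its core, the approach described above looks something like this sketch (the full script in the Gist adds error handling and builds the PSCustomObject):

```powershell
# Core of the approach only; the Gist has the complete script.
# 1. Default routes (Destination 0.0.0.0), lowest metric first.
$route = Get-WmiObject -Class Win32_IP4RouteTable -Filter "Destination='0.0.0.0'" |
    Sort-Object -Property Metric1 |
    Select-Object -First 1

# 2. The adapter configuration behind that route's interface.
$config = Get-WmiObject -Class Win32_NetworkAdapterConfiguration |
    Where-Object { $_.InterfaceIndex -eq $route.InterfaceIndex }

# The first address in the list is the active IPv4 address.
$config.IPAddress[0]
```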



Thursday, 18 July 2013

BitTorrent Sync - My new favorite Private Cloud File Sync Tool

If you have been following my previous posts you will see a trend of late. I am heavily into file access and file storage for my personal use. I thought I had come up with a great solution for my own Private Cloud Storage which involved a Raspberry Pi, Samba, AjaXplorer and Deluge.

As of this morning I was informed that a product I was very interested in, but did not think suited my needs, now has an Android client. I am talking about BitTorrent Sync (BTSync).

When I first looked at BTSync and played around with it as a possible solution for my file access, I dismissed it because I was more interested in getting access to my file server rather than syncing to my devices. Using AjaXplorer I can access my home file server from anywhere with ease. What AjaXplorer lacked at the time was a desktop sync client.

Now that there is an Android client for BitTorrent Sync, I have the ultimate Private Cloud solution. On top of the Raspberry Pi, Samba, AjaXplorer and Deluge I now have BTSync to synchronise selected files from my home file server to my laptop, desktop or mobile phone. BTSync complements AjaXplorer rather than replaces it. AjaXplorer provides a beautiful web interface to all the files on my file server while BTSync keeps my frequently accessed files at my fingertips.

Better yet, on top of synchronising my frequently accessed files, I have configured the new Android BTSync client to synchronise my phone's DCIM (Digital Camera IMage) folder to my file server and desktop. Now if I take a photo with my phone it is automatically available on my file server for my wife to access and on my desktop for me to manage. Using my desktop I can delete poor pictures and generally manage my photos, and the changes will be synchronised.

What about backups? The BTSync client has a trash folder and an archive (versioning) folder, but nothing beats a manual backup. Once a week I back up my file server to an external USB hard disk drive. I still have a Dropbox account and have decided to use it for only one purpose: I have my Android Dropbox client configured to automatically upload photos. I will forget about my Dropbox account unless there is some need to access the uploaded files.

I am not going to post on my blog about how to install BTSync because it is so easy to do on a running Debian system. See this forum post for details.


 

Thursday, 13 June 2013

Private Cloud Storage

I am a big fan of Dropbox.   I have been in the information technology industry for well over a decade now and my first smart phone made me very interested in the cloud storage solutions available.   It's not until you have a modern mobile phone that you realize how handy it is having your data available whenever and wherever you are.

So with my new HTC Incredible S, purchased December 2011, I installed Dropbox.   I then proceeded to install other Dropbox clients on my Windows and Linux desktops.   Lo and behold, my files just appeared everywhere at the same time. Perfect.

Then I learnt about Evernote.  At the time I thought Evernote was handy and complemented Dropbox quite nicely.   After using Evernote for a year or so and learning how to use it properly, I discovered the benefit of building an information architecture based around the Tags in Evernote.

So here I am in the year 2013 with some experience on personal cloud storage and I have found a few interesting observations about my use of these services.   Firstly, Dropbox is less important to me than it was to start with.   In fact when I look back now, it was never really that useful to me on my phone.   I found it very useful on my Desktop but not my phone.   Secondly, the information you want on the run is normally text.   Evernote is absolutely brilliant for handling text notes and photos that you want to use for research or reference.

I should add at this point that I don't like spending money on services if I can avoid it.   Call me cheap if you wish but I am a dad with four kids and my beautiful wife stays at home running the household and home schooling the bunch.   My children, wife, house and life in general will always be a higher priority for me than cloud storage and hence the finances get steered in that direction.

This is where Dropbox and Evernote need to be compared.   Both Dropbox and Evernote are businesses and need to make money.   Both services need to attract customers to make their service popular and gain support from third parties, making their services even more valuable.   Dropbox chose to restrict the amount of storage to encourage customers to pay for the service, while Evernote chose to restrict storage traffic.

With Dropbox you get a fixed amount of storage for free.   At the time of writing this post that amount is 2GB.   You can get up to 18GB of storage for free if you invite a lot of people to use Dropbox.   If someone installs Dropbox from an invite you sent, then both you and the person you invited will get an extra 500MB of free storage.   If you use the Dropbox link at the very top of this post and become a member of Dropbox, I will get an extra 500MB, as will you.

Now the business model Dropbox has is not bad.   If you love the service and need it for work or storing large amounts of data, then you pay for a subscription.   If you don't need to store a lot of data, it is free.   Over years of use you will find they increase your storage quota too.   The problem is most people have many gigabytes of data stored somewhere, and it will cost them money to store it in the cloud.   An example is my CD collection.   I ripped all of my CDs many years ago and, having a little OCD when it comes to information technology, I used a lossless codec.   This comes to 82GB of storage and only includes my music.   Add the videos, pictures and other documents, and it would cost me $500 a year to store it all in the cloud with Dropbox.

How about Evernote?   Well, Evernote could have chosen the same storage limit business model as Dropbox but they didn't.   They chose, in my opinion, a much better solution.   Evernote only charges you for traffic to their site, and it is only accumulated monthly.   At the time of writing this post you can upload 60MB of data per month for free.   My quota used today is 2.2MB or 4%, and this will be reset in 9 days.   What this means for someone like me is that the service is totally free and is unlikely to ever cost me money.   Does that mean I am a leech on Evernote, not contributing to it?   Definitely not.   By using Evernote, blogging about Evernote, and showing people Evernote to explain information architecture for SharePoint content types, I am giving them free publicity.   I am making the product more popular.   I am one of the many millions of people with the app installed on my phone, adding to its popularity.   Evernote benefits by supporting me as a free customer.

So where are we?   Well, you will have gathered by now that I am a big fan of Evernote and still a fan of Dropbox.   But I don't need Dropbox.   I have a 4TB USB hard disk attached to a Raspberry Pi at home and it has all of my files stored on it.   I have another 4TB USB disk I use to back up all of the files in case of hardware failure.   This file server setup has cost me only a minimal amount.   The Western Digital USB drives cost $198 each.   The Raspberry Pi cost $50 with a case.   So for under $500 I have massive amounts of storage that will last for a few years at least.

All of this led me to research private cloud storage.   I don't need Dropbox to store my files.   What I need is access to my files from anywhere.   I started researching open source private cloud storage solutions with the intent of replacing Dropbox, but I learned after installing a few of them that I didn't want file sync on my phone.   I just want to be able to access the files.   Conversely, I do want file sync for my desktops.

I started my private cloud storage research with ownCloud.   I tried very hard to love this open source product but found it so buggy as to be unusable for me.   So I started looking at other solutions.   This Reddit post helped a lot with my research.

Here is a list of the private cloud storage solutions I looked into or installed, with a brief comment on each:
  • ownCloud 
    I installed this and found it buggy and poorly programmed for its primary task.
  • git-annex
    I am a developer and LOVE git, but this isn't quite what I wanted.
  • Seafile
    I installed this and was very happy with it, except that files are stored in a database format rather than as plain files, so you lose easy direct access to them.
  • SparkleShare
    I was keen on this solution but I could not see an Android client on Google Play so did not install it.
  • BitTorrent Sync
    This is an awesome file sync tool but again, not quite what I was looking for.
  • SpiderOak
    This solution is not open source so I didn't really look into it.
  • Unison
    I didn't look into this solution at all.
  • AjaXplorer
    This is a brilliant browser and mobile file access solution with a desktop client on the way.
After evaluating some of the above private cloud solutions I settled on AjaXplorer.   Here is my previous blog post about installing it on a Raspberry Pi.   It gives me access to my files from my mobile with a nice client interface.   I can access my files with a browser if need be.   And I will be able to sync files to my desktop once the desktop client is released.

So I now have a headless Raspberry Pi installed in my house, running off a phone charger and using maybe $15 worth of electricity per year.   Plugged into it is a Western Digital 4TB USB hard disk.   The Raspberry Pi runs three primary services: Samba for local file access, AjaXplorer for private cloud file access, and Deluge for the odd BitTorrent file I need to download.

I am still a user and fan of Dropbox.   I use the Camera Upload feature of the Android Dropbox client.   It allows me to share files with other Dropbox users.   It also lets me share files with non-Dropbox users through public links, saving my home upload bandwidth.   But I realized through this whole process that I didn't need a cloud storage solution; I already had one.   All I needed was easy access to it.

I will continue to be a big fan of Evernote.   It is perfect for information access on your mobile devices and desktops.

If you know of any other private cloud storage solutions I have not looked into, please comment below.

[Update - 2013-07-18] -  I have added to my Private Cloud Storage solution and blogged about it here.


Tuesday, 28 May 2013

Installing AjaXplorer with Nginx on Debian

Over the past few months I have been looking into private cloud storage.   I have been using Dropbox for a long time now but am unhappy with the use of a third party for file storage.   That said, Dropbox is an awesome product and I will still use it for some functions in the future.   After much research I think I may have found a solution for private cloud storage that is best suited to my needs.

I will write another post about the different solutions I have found and tested but for now I am installing AjaXplorer.   This product is open source, installable on your own hardware and gives you access to your already existing file server with minor configuration.

Following are the instructions for installing AjaXplorer with Nginx on Debian Wheezy.   The hardware I am using is a Raspberry Pi with a 4TB Western Digital USB hard drive attached.   Because the Raspberry Pi is a low-powered device I am using Nginx as the web server with PHP-FPM for processing PHP.

Assumptions
  • You already have Debian 7.0 (Wheezy) running.
  • You can download the AjaXplorer compressed file to your Debian server.

To start off we need to install the prerequisite packages.   I am keen to keep my Raspberry Pi lean, so I did some testing to determine the minimum packages required for the full functionality of AjaXplorer.   Note I am not including the requirements for the AjaXplorer desktop client because it is still in beta and I am not interested in testing it yet.   If you wish to use the desktop client you will need some rsync and PHP-related packages.   So here is the command to install the prerequisites;

apt-get install nginx php5 php5-fpm php5-gd php5-cli php5-mcrypt

Once the prerequisites are installed, create the www directory and set ownership;

mkdir /var/www
chown www-data:www-data /var/www

We need to configure php to support larger file uploads so edit the php.ini file;

vim /etc/php5/fpm/php.ini

Edit the following values to your liking;

file_uploads = On
post_max_size = 20G
upload_max_filesize = 20G
max_file_uploads = 20000
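If you are scripting the build, the same edits can be applied non-interactively. Here is a minimal sketch using GNU sed, run against a scratch file so it is safe to try anywhere; on the real server you would point PHP_INI at /etc/php5/fpm/php.ini instead.

```shell
# Demo uses a scratch file; on the server set PHP_INI=/etc/php5/fpm/php.ini
PHP_INI=/tmp/php.ini.demo
printf 'file_uploads = Off\npost_max_size = 2M\n' > "$PHP_INI"

for setting in 'file_uploads = On' 'post_max_size = 20G' \
               'upload_max_filesize = 20G' 'max_file_uploads = 20000'; do
  key=${setting%% =*}
  if grep -q "^$key *=" "$PHP_INI"; then
    # Rewrite the existing line in place
    sed -i "s|^$key *=.*|$setting|" "$PHP_INI"
  else
    # Setting not present yet, so append it
    echo "$setting" >> "$PHP_INI"
  fi
done
cat "$PHP_INI"
```

The grep-then-sed pattern means re-running the script is harmless, which is handy if you rebuild the Pi from a provisioning script.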

Now we need to configure Nginx to setup our AjaXplorer web site (use your own domain name below);

vim /etc/nginx/sites-available/x.yourdomain.com

Here is the x.yourdomain.com config file I am using. Make sure you change the client_max_body_size value and replace x.yourdomain.com with your server's DNS name;

server {
  listen 80;
  server_name x.yourdomain.com;
  root /var/www;
  index index.php;
  client_max_body_size 20G;
  access_log /var/log/nginx/x.yourdomain.com.access.log;
  error_log /var/log/nginx/x.yourdomain.com.error.log;

  location / {
  }

  location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
  }

  include drop.conf;
  include php.conf;
}

Take note of the two include statements at the bottom of the Nginx site file.   You will need to make these files also;

vim /etc/nginx/drop.conf

And here is my drop.conf contents;

location ^~ /conf/       { deny all; }
location ^~ /data/       { deny all; }
location = /robots.txt  { access_log off; log_not_found off; }
location = /favicon.ico { access_log off; log_not_found off; }
location ~ /\.          { access_log off; log_not_found off; deny all; }
location ~ ~$           { access_log off; log_not_found off; deny all; }

Note the first two deny all statements above are specific to AjaXplorer.
Now create the php.conf file;

vim /etc/nginx/php.conf

Here is my php.conf contents;

location ~ \.php {
  try_files $uri =404;
  fastcgi_param  QUERY_STRING       $query_string;
  fastcgi_param  REQUEST_METHOD     $request_method;
  fastcgi_param  CONTENT_TYPE       $content_type;
  fastcgi_param  CONTENT_LENGTH     $content_length;
  fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
  fastcgi_param  SCRIPT_FILENAME    $request_filename;
  fastcgi_param  REQUEST_URI        $request_uri;
  fastcgi_param  DOCUMENT_URI       $document_uri;
  fastcgi_param  DOCUMENT_ROOT      $document_root;
  fastcgi_param  SERVER_PROTOCOL    $server_protocol;
  fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
  fastcgi_param  SERVER_SOFTWARE    nginx;
  fastcgi_param  REMOTE_ADDR        $remote_addr;
  fastcgi_param  REMOTE_PORT        $remote_port;
  fastcgi_param  SERVER_ADDR        $server_addr;
  fastcgi_param  SERVER_PORT        $server_port;
  fastcgi_param  SERVER_NAME        $server_name;
  fastcgi_pass unix:/var/run/php5-fpm.sock;
}

The last statement in the above file is the mapping between Nginx and PHP5-FPM.

Now that all the Nginx files are created we can enable the site by deleting the default Nginx site and linking to the new site;

cd /etc/nginx/sites-enabled
rm default
ln -s ../sites-available/x.yourdomain.com
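The `ln -s` above depends on your current directory. A slightly more defensive sketch uses absolute paths and `-sfn` so re-running it simply replaces a stale link. It is demonstrated here on a scratch stand-in for /etc/nginx so it is safe to run anywhere; substitute the real paths on your server.

```shell
# Scratch stand-in for /etc/nginx so the commands are safe to try anywhere
NGINX=/tmp/nginx.demo
mkdir -p "$NGINX/sites-available" "$NGINX/sites-enabled"
touch "$NGINX/sites-available/x.yourdomain.com"
touch "$NGINX/sites-enabled/default"

rm -f "$NGINX/sites-enabled/default"
# -s symbolic, -f replace an existing link, -n do not follow an existing link
ln -sfn "$NGINX/sites-available/x.yourdomain.com" \
        "$NGINX/sites-enabled/x.yourdomain.com"
readlink "$NGINX/sites-enabled/x.yourdomain.com"
# On the real server, finish with: nginx -t && service nginx reload
```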

Time to get the AjaXplorer files. Download the latest version and save it to your /var/www directory. Extract the downloaded file to the root of the www directory;

cd /var/www
tar -xzf <gz file name here>
ls -l
mv <extracted directory name>/* /var/www
rm -R /var/www/<extracted directory name>
chown -R www-data:www-data /var/www
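The move-and-delete dance can be avoided with tar's `--strip-components` option, which drops the archive's top-level folder during extraction. Here is a sketch demonstrated on a scratch tarball (the `ajaxplorer-x.y.z` name is hypothetical; on the server you would extract your downloaded archive straight into /var/www).

```shell
# Build a throwaway tarball that mimics the AjaXplorer download layout
WORK=/tmp/ajx.demo
rm -rf "$WORK" && mkdir -p "$WORK/ajaxplorer-x.y.z" "$WORK/www"
echo '<?php ?>' > "$WORK/ajaxplorer-x.y.z/index.php"
tar -czf "$WORK/aj.tar.gz" -C "$WORK" ajaxplorer-x.y.z

# Extract straight into the web root, dropping the top-level folder
tar -xzf "$WORK/aj.tar.gz" -C "$WORK/www" --strip-components=1
ls "$WORK/www"
```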

Prior to opening a browser and seeing the result we need to restart the required services to pick up the new config files;

service php5-fpm restart
service nginx restart

Now open a browser and hit your IP address or DNS name.   The first time you access AjaXplorer you will see a diagnostics page that will look like this.


You will need to fix any issues discovered by the Diagnostics program before continuing. You will notice the warning about SSL Encryption.   I am accessing my server using Pound as a reverse proxy to encrypt the pages using SSL.

Once you have fixed any errors reported by the Diagnostics page, click the link under the title to continue to the AjaXplorer main interface. Log in with a username of admin and a password of admin on first access.   Make sure you change this password.

The last required configuration for the installation is to adjust the AjaXplorer upload file size limit. This is achieved under settings;


That's it.   AjaXplorer is now installed and waiting for you to configure repositories and other customizations.  There are plugins available and client applications.   I am using the Android client successfully and will look at the Desktop client once it is out of beta.

The Raspberry Pi is an amazing platform for free open tools like this, and I am now using a low-powered Pi with a USB disk as my home file server, cutting my electricity bill and reducing my carbon footprint.



Monday, 29 April 2013

Remotely Connect to Hyper-V with Hyper-V Manager

On Windows 8 or Windows Server 2012 you can install the Hyper-V Management Console and use it to connect to a server or client running Hyper-V Services.

Microsoft has the basic configurations covered at this address;
http://technet.microsoft.com/en-us/library/jj647788.aspx

There is a rather important detail missing though. To connect to the Hyper-V Service running on a different machine, you need to run the Hyper-V Manager using a local account.

Now if you are like me you are thinking to yourself "I am using my local account!" Well you may not be. If you chose to use a Microsoft account when you installed Windows 8, then you are not using a local account.

So make sure you have a local administrative account on both the Hyper-V server and the remote management machine.   Hold down the Shift key, right-click the Hyper-V Manager link, select "Run as different user", and use your local account.

Have a read through the community comments on the bottom of the Technet page for more details.


Wednesday, 24 April 2013

Export Hyper-V Virtual Machine to a File Share in a Workgroup

I have a number of Windows Server 2012 Hyper-V hosts that are not members of a Domain.
To export the Virtual Machines to a File Server we need to authenticate the System service account to the File Server before the Hyper-V Management Service can access the shares. I found this neat trick on Microsoft's Technet Forum which worked wonderfully using PSExec;


1. Open an elevated command prompt: Click start, type CMD, and press Ctrl+Shift+Enter, then accept the UAC dialog (if applicable).

2. Run:
 psexec.exe \\<localServerName> /s cmd.exe
 This will provide you with a new command prompt that runs under the SYSTEM context instead of your user context.

3. Run:
 net use \\<remoteServerName>\<sharename> /user:<domain>\<userName> <password>
 This authenticates the SYSTEM account to the share you want to export to.  
 As an example, the line could look like: "net use \\server2\c$ /user:mydomain\Administrator MyAw3s0m3Pwd"

4. Return to the Hyper-V MMC, and try your export again.  It should work now.

5. When you're done, use:
 net use \\<remoteServerName>\<sharename> /delete
 This revokes the credentials from the SYSTEM account.  Alternatively, you could reboot.

6. Type exit, press enter.  You've dropped back to the original command prompt.  Type exit and press enter again, or simply close the window.



This is not a process you would want to perform for a nightly backup, but as a one-off it does the trick.



Vim

I have been using Linux for a number of years now and right from the start have been using Vim.
It is an elite text editor with a number of inside jokes about it.   Love it or hate it, a skilled Vim operator will outperform any peers on other text editors.   I am not excluding Vi-compatible text editors here.

That being said, I am not an expert and this blog post is a repository of commands I am learning and have learned.   I plan on using this page to remind myself of the rarely used commands.   This page will be updated often.

So here are the commands I am learning or know so far;

%s/foo/bar/g     [Find and Replace foo with bar]
V then =         [Reformat Selection]
n      [After Search Find Next]
v      [Visual Mode]
V      [Visual Mode Line Select]
gg=G   [Re-indent Entire Document]
p      [Put or Paste Buffer]
y      [Yank or Copy Selected]
dd     [Delete the Current Line]
3dd    [Delete Three Lines]
u      [Undo]

Just for reference, here is my vimrc config file;

" Debian Statement
runtime! debian.vim

" Set Syntax and FileType
syntax on
filetype on
filetype indent on
filetype plugin on

" Set Omni completion filetype associations
set ofu=syntaxcomplete#Complete
autocmd FileType html :set omnifunc=htmlcomplete#CompleteTags
autocmd FileType ruby,eruby set omnifunc=rubycomplete#Complete


" Set Search Format
set incsearch
set ignorecase
set smartcase

" Set Tab Style
set expandtab
set shiftwidth=2
set tabstop=2
set softtabstop=2
set smartindent
set autoindent
set wildmode=longest:full
set wildmenu

" Set Visual Style
set cursorline
hi CursorLine term=none cterm=none ctermbg=3

" Key Maps - Insert Mode
imap ii <C-[>

" Set Environment Values
set nocompatible
set showcmd
set showmatch
set ignorecase
set smartcase
set incsearch
set autowrite
set hidden
set viminfo='100,<100,s10,h

" Enable Terminal Colour Support
if &term =~ "xterm"
  set t_Co=256
  if has("terminfo")
    let &t_Sf=nr2char(27).'[3%p1%dm'
    let &t_Sb=nr2char(27).'[4%p1%dm'
  else
    let &t_Sf=nr2char(27).'[3%dm'
    let &t_Sb=nr2char(27).'[4%dm'
  endif
endif



Thursday, 28 March 2013

SharePoint 2013 Site Closure

I am working on SharePoint Server 2013 and just learnt about Site Closure or Closing a Site. What a great addition to the site management. This new feature allows an administrator to define a Site Policy to close or delete sites on a schedule.

To configure Site Closure you first have to make a Site Policy in the top level site settings under Site Collection Administration.


In the Site Policies page you can click create to make a Site Policy. When you create a Site Policy you have these options available to you.

The first radio button selection, "Do not close or delete site automatically", is purely so you can have the Site Collection be read only when it is closed.
If you select the second radio button "Delete sites automatically" you get more options.

As you can see, the site can be deleted automatically on a schedule based on either the Site closed date, or the Site created date.
If you choose the last radio button, "Close and delete sites automatically", you get one more schedule option available to you.

Now the site can be closed on schedule and deleted on schedule.

Once one or more Site Policies have been created you apply the policies to the existing sites using the Site Closure and Deletion link on the Site Settings page under Site Administration.

On the Site Closure and Deletion page you can close the site based on the policy you select at the bottom of the page.   To delete the site manually you need to go back to Site Setting and click Delete This Site under Site Actions.


So as you can see, it is a simple feature, but it gives SharePoint administrators far greater control over the removal of sites in a typical user site creation scenario.

In addition to the above Site Policy settings, you can enable Self-Service Site Creation, forcing users to select a Site Policy to apply to their site. Finally we have the ability to clean unused sites out of SharePoint, rather than the delete-only option of the past.



Installing Ruby on Rails using Debian Wheezy

So I am setting up a website for a community band and wanted to try out using Ruby on Rails. The installation of Ruby with Rails is not as easy as I would have thought. After some trial and error here is the smoothest way to get Rails up and running on Debian Wheezy RC.

Firstly get Debian fully updated and edit the apt.conf and sources.list files to add support for unstable packages. We need this to install a JavaScript engine. See my last post for details of this step.

The next thing to do is install all the required packages to get Ruby working.
apt-get -y install ruby ruby-dev rubygems sqlite3 libsqlite3-dev git
Rails will need a javascript engine so install node.js from the unstable repository with this command. Note that once node.js is available in the Testing or Stable package repositories you will not need to tell apt to install from Unstable.
apt-get -y -t unstable install nodejs
Now we can install Rails using Ruby Gems.
gem install rails
[Edit]
As an option, you could install Rails using apt.
apt-get install ruby-rails-3.2
That's it. If you want to use other Ruby frameworks you can install them also. In this case I am going to give RefineryCMS a shot.



Tuesday, 26 March 2013

Installing Unstable Packages on Debian Testing

If you want to install unstable packages onto a Debian Testing installation you will need to make the following changes to Apt.

Firstly edit or create the /etc/apt/apt.conf file and add this line;
vim /etc/apt/apt.conf
APT::Default-Release "testing";

Now we need to add the unstable repositories to the sources.list file.
vim /etc/apt/sources.list

Here is the sources.list file I am using at the time of writing.
# Testing Repository
deb http://ftp.au.debian.org/debian testing main contrib non-free
deb-src http://ftp.au.debian.org/debian testing main contrib non-free

deb http://ftp.debian.org/debian/ testing-updates main contrib non-free
deb-src http://ftp.debian.org/debian/ testing-updates main contrib non-free

deb http://security.debian.org/ testing/updates main contrib non-free
deb-src http://security.debian.org/ testing/updates main contrib non-free

# Unstable Repository
deb http://ftp.au.debian.org/debian unstable main contrib non-free
deb-src http://ftp.au.debian.org/debian unstable main contrib non-free

Lastly update the repositories.
apt-get update
Now if you want to install a package from the unstable distribution of Debian, use the apt-get command with a couple of extra parameters.   This example installs nodejs from unstable.
apt-get -t unstable install nodejs


Thursday, 21 March 2013

Installing ownCloud v5.0 on Debian Wheezy using Nginx running on a Raspberry Pi

My last post was about installing ownCloud onto Debian Wheezy using Git. I didn't mention in that post that Debian was running as a Virtual Machine on Windows Server 2012 Hyper-V. This install is working well for me but I wanted to use a Raspberry Pi for the install of ownCloud so I could attach a USB hard disk for storage and use as little power as possible.

I decided to try installing ownCloud with Nginx as the web server because of articles around the web stating that Nginx uses less resources than Apache2.   This being the case it makes Nginx a great candidate for the web server when using a low powered Raspberry Pi as the hardware.

So here is the process I used to achieve my goal.

Firstly we need to get the Raspberry Pi setup with Debian, or as it is called in the Pi world, Raspbian. I didn't need a graphical environment for this setup, so I used a custom version of Raspbian called Raspbian Server Edition.  Raspbian SE was at v2.3 at the time of writing.

We can use the beginners guide to install the Raspberry Pi operating system to an SD card. Just use the Raspbian SE images instead of the full Raspbian image.

Once you have the Raspberry Pi installed and running including a static IP address, SSH server and other standard configurations, you then need to install the required packages;
apt-get -y install nginx php5 php5-fpm php5-cgi php5-gd php5-json php-pear php-xml-parser php5-intl php5-sqlite curl libcurl3 libcurl3-dev php5-curl smbclient cifs-utils mp3info zip git
Since we have installed so many packages, now is a good time to do a restart and to test that the Nginx server starts after boot. So type the command reboot and wait for the system to come up.

You can now open a browser and hit the address of your Pi. You should get a welcome message as below;


Nginx does not use the /var/www directory by default so let's make that directory and set up security on it;
mkdir /var/www
chmod 774 /var/www
chown www-data:www-data /var/www
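For reference, mode 774 grants read/write/execute to the owner and group (www-data here) and read-only access to everyone else. A quick sketch you can run anywhere to see the resulting permission bits:

```shell
# Scratch directory so the demo does not touch /var/www
D=/tmp/perm.demo
rm -rf "$D" && mkdir "$D"
chmod 774 "$D"
# %a prints the octal mode, %A the symbolic form
stat -c '%a %A' "$D"   # 774 drwxrwxr--
```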

Now let's configure the site file for Nginx. Create a new file in the sites-available directory, changing the name as desired;
vim /etc/nginx/sites-available/oc.domain.com
Now paste in your web server configuration. Here is the site file I ended up with;

# This is the complete example of nginx configuration file for ownCloud 5
# This config file configures proper rewrite rules for the new release of ownCloud
# Also, this config file configures nginx to listen on both IPv4 and IPv6 addresses
# If you want it to listen to IPv4 address only, use listen 80; instead of listen [::]:80

# First, we configure redirection to HTTPS (substitute owncloud.example.com with the proper address of your OC instance)

server {
  listen 80;
  server_name owncloud.example.com;
  rewrite ^ https://$server_name$request_uri? permanent;
}

# Now comes the main configuration for ownCloud 5

server {
  listen 443 ssl; # Make it listen on port 443 for SSL, on both IPv4 and IPv6 interfaces
  server_name owncloud.example.com;

  root /var/www; # Make sure to insert proper path for your ownCloud root directory

  index index.php;

  # Now we configure SSL certificates. Make sure you enter correct path for your SSL cert files
  ssl_certificate /etc/ssl/localcerts/oc.pem;
  ssl_certificate_key /etc/ssl/localcerts/oc.key;

  client_max_body_size 2G; # This is the first parameter which configures max size of upload, more to come later
  fastcgi_buffers 64 4K;

  # Configure access & error logs
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  # Configure proper error pages
  error_page 403 = /core/templates/403.php;
  error_page 404 = /core/templates/404.php;

  # Some rewrite rules, more to come later
  rewrite ^/caldav((/|$).*)$ /remote.php/caldav$1 last;
  rewrite ^/carddav((/|$).*)$ /remote.php/carddav$1 last;
  rewrite ^/webdav((/|$).*)$ /remote.php/webdav$1 last;

  # Protecting sensitive files from the evil outside world
  location ~ ^/(data|config|\.ht|db_structure.xml|README) {
    deny all;
  }

  # Configure the root location with proper rewrite rule
  location / {
    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
    rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
    rewrite ^/apps/calendar/caldav.php /remote.php/caldav/ last;
    rewrite ^/apps/contacts/carddav.php /remote.php/carddav/ last;
    rewrite ^/apps/([^/]*)/(.*\.(css|php))$ /index.php?app=$1&getfile=$2 last;

    rewrite ^(/core/doc[^\/]+/)$ $1/index.html;

    index index.php; # This one might be redundant, but it doesn't hurt to leave it here

    try_files $uri $uri/ index.php;
  }

  # Configure PHP-FPM stuff
  location ~ ^(?<script_name>.+?\.php)(?<path_info>/.*)?$ {
    try_files $script_name =404;
    fastcgi_pass unix:/var/run/php5-fpm.sock; # Be sure to check proper socket location for php-fpm, might be different on your system
    fastcgi_param PATH_INFO $path_info;
    fastcgi_param HTTPS on;

    # This one is a little bit tricky, you need to pass all parameters in a single line, separating them with newline (\n)
    fastcgi_param PHP_VALUE "upload_max_filesize = 2G \n post_max_size = 2G"; # This finishes the max upload size settings
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # On some systems OC will work without this setting, but it doesn't hurt to leave it here
    include fastcgi_params;
  }

  location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
    expires 30d;
    access_log off;
  }

}

Once you have saved the file you will need to make a symbolic link to it in the sites-enabled directory;
rm /etc/nginx/sites-enabled/default
ln -s /etc/nginx/sites-available/oc.domain.com /etc/nginx/sites-enabled/oc.domain.com

It's always nice to see some progress being made, so let's make a phpinfo() file and test the web server and PHP functionality;
echo "<?php phpinfo() ?>" > /var/www/index.php
service nginx restart

Now open a web browser and hit the IP address of your Raspberry Pi. We should see the standard PHP information being displayed.



As with my last post, we need to add the en_US.UTF-8 locale.   I will keep this post short; see the ownCloud installation process in my last post for those steps.

Here are the references I used for this install;
http://rasberrypibeginnersguide.tumblr.com/post/27283563130/nginx-php5-on-raspberry-pi-debian-wheezy
http://blog.martinfjordvald.com/2010/07/nginx-primer/
http://www.webhostingtalk.com/showthread.php?t=1025286


Wednesday, 27 February 2013

Installing ownCloud on Debian GNU/Linux using Git

I have been watching an open source project for a while now with great interest.   The name of the project is ownCloud and it is a file synchronising server with a web interface that operates much the same as Dropbox, SkyDrive, iCloud and other cloud based storage solutions.
  
Now that ownCloud is feature rich and has a client for Android, iOS, Windows and OS X, I wanted to install it to move my cloud storage away from Dropbox and onto my own server. By doing this I will have control of my data without the need to pay for space and more flexibility as to how I access my data.

I am a fan of the GNU/Linux distribution called Debian so I installed a virtual machine running Debian Wheezy release candidate to host ownCloud. You can install ownCloud on Windows Server if you wish. The stable version of ownCloud on their website is at v4.5 which has issues using external storage on a Windows shared folder, so I needed to install the latest beta version.

To get the latest version of ownCloud I needed to install using the Git repository from GitHub. At the time of writing this article the ownCloud version on GitHub is at v5.0 beta 2.

To start off with I needed to install the prerequisites on Debian with this command;
apt-get -y install apache2 php5 php5-gd php5-json php-pear php-xml-parser php5-intl php5-sqlite curl libcurl3 libcurl3-dev php5-curl smbclient cifs-utils mp3info zip git

At this point, I changed the php.ini file to increase the upload_max_filesize value from the default 2M to 1024M;
vim /etc/php5/apache2/php.ini

I decided to install ownCloud into the root of the apache2 www folder;
cd /var/www

I removed the original apache2 index file;
rm index.html

To get the latest ownCloud version I needed to use Git to clone the core, 3rdparty and apps repositories into the root of the web server.   Note that apps needs to be cloned into a new target apps2 directory to prevent conflicts with the core repository's apps folder. I will need to edit the config.php file later to include the apps2 folder, but it does not exist yet;
git clone https://github.com/owncloud/core ./
git clone https://github.com/owncloud/3rdparty
git clone https://github.com/owncloud/apps apps2

Now I need to give ownership of the ownCloud directories to www-data with;
chown -R www-data:www-data ./*
chown -R www-data:www-data /var/www

To improve the security of ownCloud I needed to enable .htaccess in the virtual host file;
vim /etc/apache2/sites-enabled/000-default

Now change the /var/www directory element so AllowOverride is set to 'All' as in this example;
<Directory /var/www/>
  Options Indexes FollowSymLinks MultiViews
  AllowOverride All
  Order allow,deny
  allow from all
</Directory>


When I first tested this install approach the Admin page reported a locale issue, so I needed to install the en_US.UTF-8 locale for the system to work correctly with file names. My default locale is en_AU.UTF-8. To install the US UTF-8 locale run this command;
dpkg-reconfigure locales

Then select the US UTF-8 locale.
Check the installed locales with;
locale -a

Lastly enable the following modules and restart apache2;
a2enmod rewrite
a2enmod headers
service apache2 restart

I can now open a web browser and access my ownCloud instance (http://ServersIPAddress/) but there is an expected error.   When I first connect to ownCloud like this it will create a config.php file.


Now that I have an ownCloud config file I need to edit it to support the apps2 directory. The apps repository page at https://github.com/owncloud/apps (near the bottom) states the need for this change. The instructions there say to leave the apps and apps2 directories as writable = false, but that failed on my install; I needed to change writable to true on the apps2 directory for user-downloaded apps;
vim /var/www/config/config.php

Here is my complete config.php file;
<?php
$CONFIG = array (
  'instanceid' => '512efd12eb6d8',
  'passwordsalt' => '5555c4f3a4fbeb1a527d376095555',
  'datadirectory' => '/var/www/data',
  'dbtype' => 'sqlite3',
  'version' => '4.94.10',
  'installed' => true,
  'apps_paths' =>
  array (
    0 =>
    array (
      'path' => '/var/www/apps',
      'url' => '/apps',
      'writable' => false,
    ),
    1 =>
    array (
      'path' => '/var/www/apps2',
      'url' => '/apps2',
      'writable' => true,
    ),
  ),
);


I can now connect to my new ownCloud server and run through the setup wizard to create the SQLite database and admin user.

I now have a working cloud storage solution!

To update to the latest Git version I will need to run these commands;
cd /var/www
git pull
cd /var/www/3rdparty
git pull
cd /var/www/apps2
git pull
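The three pulls can be collapsed into one loop using `git -C`, which runs git inside the given directory. Here is a sketch demonstrated against a throwaway local repository so it is safe to run anywhere; on the server the loop list would be /var/www, /var/www/3rdparty and /var/www/apps2.

```shell
# Build a throwaway upstream repo and a checkout of it
WORK=/tmp/gitpull.demo
rm -rf "$WORK" && mkdir -p "$WORK"
git init -q "$WORK/upstream"
git -C "$WORK/upstream" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'first'
git clone -q "$WORK/upstream" "$WORK/checkout"
git -C "$WORK/upstream" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'second'

# On the server: for repo in /var/www /var/www/3rdparty /var/www/apps2; do ...
for repo in "$WORK/checkout"; do
  git -C "$repo" pull -q
done
git -C "$WORK/checkout" rev-list --count HEAD
```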

There is more to do such as enabling SSL and mapping to my external shared folder running on Windows but that is another story.

Here are the references I used to complete this task;
https://github.com/owncloud   (ownCloud github site)
http://doc.owncloud.org/server/5.0/admin_manual/installation.html   (v5.0 installation documents)
http://forum.owncloud.org/viewtopic.php?f=17&t=8012   (fixing the US UTF-8 issue)

To use port SSL use this reference to enable it on Apache2 with Debian;
http://wiki.debian.org/Self-Signed_Certificate




Monday, 18 February 2013

Disable Adobe Reader Updates

If you need to disable Adobe Reader from automatically updating or asking to be updated on Windows you can do so by creating a REG_DWORD registry key named bUpdater.

Here is the location to create the key;
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\10.0\FeatureLockDown

Notice the \10.0\ in the above address.   Replace this with the version you are trying to configure.
Inside the above address create this key;
REG_DWORD   bUpdater   0x00000000

Setting this value will disable the updater and remove the "Check for updates" option in the help menu.

Here is the reference from Adobe's website.

I used the following PowerShell script to search for all FeatureLockDown keys and disable the Updater regardless of the version of Adobe Reader that is installed.

"Disabling Adobe Acrobat Reader Updates."
$featureLockDownKeys = Get-ChildItem -Path HKLM:/SOFTWARE/Policies/Adobe -Recurse | Where-Object -Property Name -Match "FeatureLockDown$"

foreach ($fldk in $featureLockDownKeys)
{
  New-ItemProperty -Path $fldk.PSPath -Name "bUpdater" -Value 0 -PropertyType DWORD -ErrorAction SilentlyContinue | Out-Null
  New-ItemProperty -Path $fldk.PSPath -Name "iDisablePromptForUpgrade" -Value 0 -PropertyType DWORD -ErrorAction SilentlyContinue | Out-Null
}