Root and install Google Play on the Amazon Kindle Fire HD 10 (8.1.4)

This is a working procedure to root and install Google Play on the Amazon Kindle Fire HD 10″ (8.1.4).

Root the Kindle Fire HD

I followed the procedure described here: http://forum.xda-developers.com/showthread.php?t=1886460

$ ./RunMe.sh

Select the option 1) Normal

  • Now your device is rooted; you may verify:

sh-3.2$ ./adb shell
shell@android:/ $ su
shell@android:/ # cd /
shell@android:/ # ls

  • Reboot and proceed with the Google Play Store installation

Install Google Play Store

Reference: http://forum.xda-developers.com/showthread.php?t=1893410

  • Open ES File Explorer, go to Settings > Root Settings, and enable Root Explorer, Up to Root, and Mount File System.
  • Download GoogleServicesFramework.apk, Vending.apk, and Play.apk and copy them to your sdcard.

GoogleServicesFramework.apk – mediafire.com/?zaumfwhraxcifqf
Vending.apk – mediafire.com/?31bn3e258jjpj8d
Play.apk – mediafire.com/?wwcqrlfwt8o1gnv

Follow the steps below to get it working.

– Open ES File Explorer, then click and install GoogleServicesFramework.apk
– Move Vending.apk to /system/app
– Change the permission to 644 (User: read/write; Group/Others: read)
– Now click and install Vending.apk
– You will see the Android Market installed on your Kindle; open it and complete the Google account registration. It is important to do this step before installing Play.apk
– Once the registration is successful, click and install Play.apk from the sdcard
– You now have a working Play Store, enjoy :)
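The 644 mode used above means owner read/write and group/world read-only. A quick local sketch of that mode (run on any Linux box, not the Kindle):

```shell
# Create a scratch file and apply the same mode used for Vending.apk.
f=$(mktemp)
chmod 644 "$f"
# Show the octal and symbolic permissions (GNU stat assumed).
stat -c '%a %A' "$f"   # 644 -rw-r--r--
rm -f "$f"
```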

Issues faced:

  • While rooting, I was not able to execute the adb binary under the stuff folder; replace it with the adb that comes with the Android SDK.
  • I got “Google Play has stopped” messages while opening Google Play. To fix it: 1) make sure you copied Vending.apk to the correct path (/system/app) and the permissions are correct; 2) do the Google account registration before installing Play.apk.
Rooted Amazon Kindle Fire HD 10 (8.1.4) with Google apps

./arun

Upgrade, Restore Drupal 7

Shell script to upgrade and restore a Drupal 7 website.
This script takes care of the actions required to upgrade Drupal to a newer version.

USAGE

  • Copy the script to your webserver.
  • Edit the script and change the variables to match your setup.
  • Give execute permission to the owner of the script (chmod u+x upgrade-restore-drupal7.sh).
  • Execute the script: ./upgrade-restore-drupal7.sh
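The timestamped backups shown in the transcripts below (e.g. 08-01-2013-0938) can be sketched like this; SITE_DIR and BACKUP_DIR are stand-ins for the script's variables, and the real script also dumps the database with mysqldump:

```shell
# Sketch of the filesystem backup step, using temp dirs as stand-ins.
SITE_DIR=$(mktemp -d)      # pretend this is your Drupal docroot
BACKUP_DIR=$(mktemp -d)    # e.g. /home/foo/backups in the transcripts
echo '<?php // placeholder' > "$SITE_DIR/index.php"
STAMP=$(date +%d-%m-%Y-%H%M)
cp -a "$SITE_DIR" "$BACKUP_DIR/$STAMP"
ls "$BACKUP_DIR/$STAMP"    # index.php
```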

UPGRADE

$ ./upgrade-restore-drupal7.sh 
 Please enter your choice:
 1. Update drupal
 2. Restore an old installation from backup
 3. Exit
1
Please enter the new drupal version (eg: 7.15) : 
7.18
Downloading drupal-7.18
Downloaded the drupal version drupal-7.18
Current site backup is created: /home/foo/backups/08-01-2013-0938
Database backup created: /home/foo/backups/08-01-2013-0938.sql
Site is in maintenance mode now
Removed all drupal core files from destination
Copied the new version contents
Drupal updated to drupal-7.18
Site is active again, but please update your database, please visit http://<yourwebsite>/update.php to finalize the process
Removed the source files

RESTORE

$ ./upgrade-restore-drupal7.sh 
 Please enter your choice:
 1. Update drupal
 2. Restore an old installation from backup
 3. Exit
2
List of available backups
08-01-2013-0753
08-01-2013-0758
08-01-2013-0804
08-01-2013-0841
08-01-2013-0849
08-01-2013-0858
08-01-2013-0900
08-01-2013-0904
08-01-2013-0905
08-01-2013-0938
Please enter the backup file name to restore: (eg: 08-01-2013-0753): 
08-01-2013-0905
Site is offline now
Removed production files
Restored the filesystem backup 
Restored the database
Site is restored

View on github

Upgrading Linux Mint 13 (maya) to Linux Mint 14 (nadia).

Linux Mint 14

Take a backup of the current sources.list; preferably make a full backup of the system.

Edit the sources.list file and replace the occurrences of maya with nadia and precise with quantal.

$ sudo vi /etc/apt/sources.list
:%s/maya/nadia/g
:%s/precise/quantal/g
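The same substitutions can be done non-interactively with sed; the sketch below runs against a sample file, and on the real system you would point it at /etc/apt/sources.list (with sudo, after taking a backup):

```shell
# Sample sources.list carrying the old release names.
printf '%s\n' \
  'deb http://packages.linuxmint.com/ maya main upstream import' \
  'deb http://archive.ubuntu.com/ubuntu/ precise main restricted universe multiverse' \
  > sources.list.sample
# Replace maya->nadia and precise->quantal, as in the vi commands above.
sed -i -e 's/maya/nadia/g' -e 's/precise/quantal/g' sources.list.sample
cat sources.list.sample
```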

The resulting file may look like this:

deb http://packages.linuxmint.com/ nadia main upstream import
deb http://archive.ubuntu.com/ubuntu/ quantal main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ quantal-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ quantal-security main restricted universe multiverse
deb http://archive.canonical.com/ubuntu/ quantal partner
deb http://packages.medibuntu.org/ quantal free non-free

Update the system

$ sudo apt-get update
$ sudo apt-get dist-upgrade

SAN and Tape backup with bacula

Install and configure bacula for SAN and Tape backup

There is already excellent documentation about Bacula installation and configuration on the Bacula website. This article is one way of getting SAN and tape backup working together with a single Bacula director installation. It assumes that you have already installed and mounted the SAN and configured the tape device.

This configuration aims at:

  • Incremental daily for 20 days
  • Differential weekly for 3 months
  • Monthly full for 6 months
  • Eject the tape to the mailslot after the backup and notify the admin

Customise it based on your requirements.

The configurations were tested with an HP MSL 2024 tape library and an MSA SAN array.

Bacula server setup

The configuration is done on Red Hat Enterprise Linux; it should be similar for other Linux distros.

  • Create a user for backup
# useradd -d /home/backup backup
  • Install the Bacula server and create the database and database users. See http://www.bacula.org/5.2.x-manuals/en/main/main/Installing_Bacula.html for installation instructions.
  • Create the necessary directories:
# su - backup
$ mkdir -p /home/backup/bacula/var/lock/subsys
$ mkdir /home/backup/bacula/var/run/
  • Configure the director (bacula-dir.conf)

$ cat ~/bacula-dir.conf

# Define the director, common for SAN and Tape
Director { # define myself
Name = {hostname}-dir # use your hostname
DIRport = 9101 # where we listen for UA connections
QueryFile = "/home/backup/bacula/script/query.sql"
WorkingDirectory = "/home/backup/bacula/wdir"
PidDirectory = "/home/backup/bacula/var/run"
Maximum Concurrent Jobs = 3
Password = "{console_password}" # Console password
Messages = Daemon
}
# List of files to be backed up to SAN
FileSet {
 Name = "File Set"
 Include {
 Options {
 signature = MD5
 }
 File = /
 }

 Exclude {
 File = /proc
 File = /tmp
 File = /.journal
 File = /.fsck
 }
}
# List of files to be backed up to tape
FileSet {
 Name = "tape Set"
 Include {
 Options {
 signature = MD5
 }
 File = /
 }

 Exclude {
 File = /proc
 File = /tmp
 File = /.journal
 File = /.fsck
 }
}
# Schedule for SAN backup
Schedule {
 Name = "WeeklyCycle"
 Run = Full 1st sun at 01:00
 Run = Differential 2nd-5th sun at 01:00
 Run = Incremental mon-sat at 01:00
}
# Schedule for tape backup
Schedule {
 Name = "TapeWeeklyFull"
 Run = Level=Full 1st sun at 03:00
}
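The TapeBackupCatalog job further down references a schedule named CatalogAfterTapeBackup that is not shown; a minimal sketch (the 05:00 timing is an assumption, chosen to run after the 03:00 tape backup):

```
# Schedule for the catalog backup, run after the tape backup completes
Schedule {
 Name = "CatalogAfterTapeBackup"
 Run = Level=Full 1st sun at 05:00
}
```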
# Definition of file storage (SAN)
Storage {
 Name = File
# Do not use "localhost" here
 Address = {FQDN} # N.B. Use a fully qualified name here
 SDPort = 9103
 Password = "{sdpassword}"
 Device = FileStorage
 Media Type = File
}
# Define storage (Tape)
Storage {
 Name = msl2024
 Address = {director-address}
 SDPort = 9103
 Password = "{director-password}"
 Device = MSL2024
 Media Type = LTO-4
 Autochanger = yes
 Maximum Concurrent Jobs = 3
}
# Generic catalog service
Catalog {
 Name = MyCatalog
 dbname = "dbname"; dbuser = "dbuser"; dbpassword = "dbpass"
}
# Tape catalog
Job {
 Name = "TapeBackupCatalog"
 JobDefs = "{dir-host-name}-tape"
 Level = Full
 FileSet="Catalog"
 Schedule = "CatalogAfterTapeBackup"
 RunBeforeJob = "/home/backup/bacula/script/make_catalog_backup.pl MyCatalog"
 RunAfterJob = "/home/backup/bacula/script/delete_catalog_backup"
 Write Bootstrap = "/home/backup/bacula/wdir/%n.bsr"
 Priority = 20 # run after main backup
}
# Default pool definition
Pool { 
 Name = Default
 Pool Type = Backup 
 Recycle = yes # Bacula can automatically recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 365 days # one year
}
# General Tape backup pool
Pool { 
 Name = TapePool
 Pool Type = Backup 
 Recycle = yes # Bacula can automatically recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 6 months # 6 months
 Recycle Oldest Volume = yes
 Storage = msl2024 
 Volume Use Duration = 4 days
}
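The SAN JobDefs below reference a Pool named File; if your configuration does not define it elsewhere, a minimal sketch (the retention value is an assumption):

```
# Fallback pool referenced by the SAN JobDefs
Pool {
 Name = File
 Pool Type = Backup
 Recycle = yes
 AutoPrune = yes
 Volume Retention = 365 days
}
```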
## Do the following configurations for each client
# Job definition, define it for each bacula client, replace clientX_hostname, Fileset accordingly
# SAN
JobDefs {
Name = "{clientX_hostname}"
Type = Backup
Client = {clientX_hostname}-fd
FileSet = "File Set"
Schedule = "WeeklyCycle"
Storage = File
Messages = Standard
Pool = File
Full Backup Pool = Full-Pool-{clientX_hostname}
Incremental Backup Pool = Inc-Pool-{clientX_hostname}
Differential Backup Pool = Diff-Pool-{clientX_hostname}
Priority = 10
Write Bootstrap = "/home/backup/bacula/wdir/%c.bsr"
}
# Tape

JobDefs {
 Name = "{clientX_hostname}-tape"
 Type = Backup
 Client = {clientX_hostname}-tape-fd
 FileSet = "tape Set"
 Schedule = "TapeWeeklyFull"
 Storage = msl2024
 Messages = Standard
 Pool = TapePool
 Full Backup Pool = TapePool
 Priority = 10
 Write Bootstrap = "/home/backup/bacula/wdir/%c.bsr"
}
# Define Job, replace clientX_hostname
# SAN
Job {
 Name = "{clientX_hostname}"
 JobDefs = "{clientX_hostname}"
}
# Tape
Job {
 Name = "{clientX_hostname}-tape"
 JobDefs = "{clientX_hostname}-tape"
}

# Define restore job
# SAN
Job {
 Name = "RestoreFiles-{clientX_hostname}"
 Type = Restore
 Client={clientX_hostname}-fd
 FileSet="File Set" 
 Storage = File
 Pool = Default
 Messages = Standard
 Where = /home/backup/archive/bacula-restores
}
# Tape
Job {
 Name = "RestoreFiles-{clientX_hostname}-tape"
 Type = Restore
 Client={clientX_hostname}-tape-fd
 FileSet = "tape Set"
 Storage = msl2024
 Pool = TapePool
 Messages = Standard
 Where = /home/backup/archive/bacula-restores
}

# Client (File Services) to backup
# SAN
Client { 
 Name = {clientX_hostname}-fd
 Address = {client_address}
 FDPort = 9102
 Catalog = MyCatalog
 Password = "{client_password}" # password for FileDaemon
 File Retention = 60 days # 60 days
 Job Retention = 6 months # six months
 AutoPrune = yes # Prune expired Jobs/Files
}
# Tape
Client {
 Name = {clientX_hostname}-tape-fd
 Address = {client_address}
 FDPort = 9202 # use different port
 Catalog = MyCatalog
 Password = "{client_password}" # password for FileDaemon
 File Retention = 6 months
 Job Retention = 6 months
 AutoPrune = yes
}
# Pool for each client
# SAN
Pool {
 Name = Full-Pool-{clientX_hostname}
 Pool Type = Backup
 Recycle = yes
 AutoPrune = yes
 Volume Retention = 6 months
 Maximum Volume Jobs = 1
 Label Format = Full-Pool-{clientX_hostname}-
 Maximum Volumes = 9
}
Pool { 
 Name = Inc-Pool-{clientX_hostname}
 Pool Type = Backup 
 Recycle = yes # automatically recycle Volumes
 AutoPrune = yes # Prune expired volumes
 Volume Retention = 20 days
 Maximum Volume Jobs = 6
 Label Format = Inc-Pool-{clientX_hostname}-
 Maximum Volumes = 7
}
Pool { 
 Name = Diff-Pool-{clientX_hostname}
 Pool Type = Backup
 Recycle = yes
 AutoPrune = yes
 Volume Retention = 40 days
 Maximum Volume Jobs = 1
 Label Format = Diff-Pool-{clientX_hostname}-
 Maximum Volumes = 10
}
# Tape, no extra definition required.

  • Make sure you label the tape and add it to the TapePool. If your tape drive has a barcode reader, use:
$ bconsole
* label barcodes
then select the TapePool.

If you have a mailslot enabled, you can configure Bacula to eject the tape to the mailslot after the backup finishes and send a notification.

$ cat /home/backup/bacula/script/delete_catalog_backup
#!/bin/sh
# Unload the tape to the mailslot for storage
mtx -f /dev/sg1 unload 24 # replace 24 with your mailslot number
# Send mail
/home/backup/bacula/script/mail.sh | mail -s "Tape backup done" admin@example.com

Configure storage daemon

Storage { # definition of myself
 Name = {director_hostname}-sd
 SDPort = 9103 # Storage daemon's port
 WorkingDirectory = "/home/backup/bacula/wdir"
 Pid Directory = "/home/backup/bacula/var/run"
 Maximum Concurrent Jobs = 20
}
#
# List Directors who are permitted to contact Storage daemon
#
Director {
 Name = {director_hostname}-dir
 Password = "{director_password}"
}
# SAN
Device {
 Name = FileStorage
 Media Type = File
 Archive Device = /media/san/bacula/ # SAN volume
 LabelMedia = yes # lets Bacula label unlabeled media
 Random Access = yes
 AutomaticMount = yes # when device opened, read it
 RemovableMedia = no
 AlwaysOpen = no
}
# Tape
Autochanger {
 Name = MSL2024
 Device = lto4drive
 Changer Command = "/home/backup/bacula/script/mtx-changer %c %o %S %a %d"
 Changer Device = /dev/sg1 # change it based on your setup
}
Device {
 Name = lto4drive
 Drive Index = 0
 Media Type = LTO-4
 Archive Device = /dev/nst0
 AutomaticMount = no
 AlwaysOpen = no
 RemovableMedia = yes
 RandomAccess = no
 AutoChanger = yes
}

Client configuration

  • Install the Bacula package on the client machines, but configure with --enable-client-only
  • Remove the director and storage daemon startup scripts
rm /etc/init.d/bacula-dir
rm /etc/init.d/bacula-sd
  • Create necessary directories
mkdir -p /home/backup/bacula/wdir /home/backup/bacula/var/run  /home/backup/bacula/var/lock/subsys/
  • Create the Bacula file daemon configuration for tape and SAN separately

SAN (bacula-fd.conf)

FileDaemon { # this is me
 Name = {clientX_hostname}-fd
 FDport = 9102 # where we listen for the director
 WorkingDirectory = /home/backup/bacula/wdir
 Pid Directory = /home/backup/bacula/var/run
 Maximum Concurrent Jobs = 20
 }

Tape  (bacula-fd-tape.conf)

FileDaemon { # this is me
 Name = {clientX_hostname}-tape-fd
 FDport = 9202 # different port than the SAN file daemon
 WorkingDirectory = /home/backup/bacula/wdir
 Pid Directory = /home/backup/bacula/var/run
 Maximum Concurrent Jobs = 20
 }
  • Edit the bacula-fd startup script and add an extra line to start the tape file daemon:
daemon /home/backup/bacula/sbin/bacula-fd $2 ${FD_OPTIONS} -c /home/backup/bacula/etc/bacula-fd-tape.conf

./arun

Host group based access restriction – Nagios

This is useful especially when different host groups belong to different entities and you need access separation.

The basic idea is to use the same login user name in the contact groups. I assume that you have Apache htaccess authentication or LDAP authentication in place.

You may create a new contact group or use an existing one; just make sure your username and contact_name match.

- Create a contact group
define contactgroup {
 contactgroup_name customer1
 alias Customer1 Servers
 members customer1
}
- Create the contact
define contact {
 contact_name customer1 ; make sure this matches the username
 alias Customer1 Contact
 service_notification_period 24x7
 host_notifications_enabled 0
 host_notification_period 24x7
 service_notification_options w,u,c,r
 host_notification_options d,u,r
 service_notification_commands notify-by-email
 host_notification_commands host-notify-by-email
 email customer1@example.com
}
- Use this contact group in host definition
define host {
 use generic-alerted-host
 host_name customer1-host
 address 8.8.8.8
 contact_groups customer1 ; make sure this matches the contactgroup_name
 max_check_attempts 3
}

Just restart Nagios and try to log in with the new user account. You may give more privileges to this user, if required, via cgi.cfg.
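For reference, the cgi.cfg directives that grant wider access look like this (a sketch; note that adding a customer user to the *_all_* lists would defeat the separation, so normally only admin users belong here):

```
# cgi.cfg -- users listed in these directives see everything
authorized_for_system_information=nagiosadmin
authorized_for_all_hosts=nagiosadmin
authorized_for_all_services=nagiosadmin
authorized_for_all_host_commands=nagiosadmin
authorized_for_all_service_commands=nagiosadmin
```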

./arun

Detected bug in an extension! Hook FCKeditor_MediaWiki::onCustomEditor

Detected bug in an extension! Hook FCKeditor_MediaWiki::onCustomEditor failed to return a value; should return true to continue hook processing or false to abort.

Backtrace:

#0 mediawiki/includes/Wiki.php(497): wfRunHooks('CustomEditor', Array)
#1 mediawiki/includes/Wiki.php(63): MediaWiki->performAction(Object(OutputPage), Object(Article), Object(Title), Object(User), Object(WebRequest))
#2 mediawiki/index.php(114): MediaWiki->initialize(Object(Title), Object(Article), Object(OutputPage), Object(User), Object(WebRequest))
#3 {main}

Edit the following file to fix this issue:

"FCKeditor/FCKeditor.body.php"
 -- public function onCustomEditor(&$article, &$user) {
 ++ public function onCustomEditor($article, $user) {

reference: http://dev.ckeditor.com/ticket/3530
./arun

svn: Can’t convert string from ‘UTF-8’ to native encoding:

"svn: Can't convert string from 'UTF-8' to native encoding:"

This usually happens when there are special characters in a file name that the client locale cannot handle.

Just set a proper locale on the client to fix this issue:

$ export LC_CTYPE=en_US.UTF-8
# make sure the locale is properly set
$ locale
LC_CTYPE=en_US.UTF-8
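To make the setting persist across sessions, it can be appended to the shell profile (a sketch assuming bash; adjust the file for your shell):

```shell
# Persist the locale for future shells.
echo 'export LC_CTYPE=en_US.UTF-8' >> ~/.bashrc
grep 'LC_CTYPE=en_US.UTF-8' ~/.bashrc
```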

./arun

Fix categories and tags in WordPress custom post types

By default, WordPress does not look into custom post types for categories and tags; even though the category names are visible, you get a NOT FOUND page when you click on a category.

A workaround for this issue:

Edit functions.php:

add_filter('pre_get_posts', 'query_post_type');
function query_post_type($query) {
    if (is_category() || is_tag()) {
        $post_type = get_query_var('post_type');
        if (!$post_type) {
            // Replace custom_post_type_name with your post_type; keep
            // nav_menu_item so the menu still shows on category pages.
            $post_type = array('post', 'custom_post_type_name', 'nav_menu_item');
        }
        $query->set('post_type', $post_type);
    }
    return $query;
}

Reference: http://wordpress.org/support/topic/custom-post-type-tagscategories-archive-page

Thanks to paranoid for guiding me to the fix. ;)

./arun

 

Replace broken hard drive in software RAID1

This scenario assumes that you have two hard disks in a RAID1 setup and one of them (say sdb) is broken.

To check the status of RAID:

$ cat /proc/mdstat

Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[1]
730202368 blocks [2/1] [U_]
md1 : active raid1 sda2[1]
264960 blocks [2/1] [U_]
md0 : active (auto-read-only) raid1 sda1[1]
2102464 blocks [2/1] [U_]

You will see [_U] or [U_] if a RAID array is degraded.
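Spotting a degraded array can be scripted by looking for an underscore in the [UU]-style status field. A sketch, run here against a sample instead of the real /proc/mdstat:

```shell
# Sample mdstat: md2 is degraded ([U_]), md1 is healthy ([UU]).
cat > mdstat.sample <<'EOF'
md2 : active raid1 sda3[1]
      730202368 blocks [2/1] [U_]
md1 : active raid1 sda2[0] sdb2[1]
      264960 blocks [2/2] [UU]
EOF
# Print the md lines whose status field contains an underscore.
grep -B1 '\[[U_]*_[U_]*\]' mdstat.sample | grep '^md'
```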

If required, mark the broken drive's partitions as failed and remove them from all md devices:

# mdadm --manage /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# mdadm --manage /dev/md1 --fail /dev/sdb2 --remove /dev/sdb2

# mdadm --manage /dev/md2 --fail /dev/sdb3 --remove /dev/sdb3

Shutdown the machine and replace the hard drive.

Once the server has booted, you will see the new device (either sda or sdb, depending on which drive was broken):

# ls -l /dev/sd*

Now we need to replicate the partition layout on the new drive:

# sfdisk -d /dev/sda | sfdisk /dev/sdb

# -d dumps the partition table of a device

Now we can add the partitions to the RAID; you can verify the partitions with fdisk -l.

# mdadm --manage /dev/md0 --add /dev/sdb1

# mdadm --manage /dev/md1 --add /dev/sdb2

# mdadm --manage /dev/md2 --add /dev/sdb3

The arrays will start syncing the data and will be ready once the sync completes.

You may verify via mdstat:

# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sda3[0] sdb3[1]
730202368 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
264960 blocks [2/2] [UU]

md0 : active (auto-read-only) raid1 sda1[0] sdb1[1]
2102464 blocks [2/2] [UU]

./arun