Monday, December 4, 2017

Oracle - Reclaim disk space

Oracle does not release disk space even after you have deleted data or dropped a tablespace. If you have Enterprise Manager, you should use it to reclaim the wasted space.

You could do it manually, but it is troublesome. Below are some very simple steps to reclaim disk space used by Oracle, provided that

1. You added a contiguous chunk of data, and

2. You deleted that same contiguous chunk of data added in point 1 and have not added any data since.

The reason is that you cannot shrink a datafile and release space if the free space sits between used data.

If you are sure you meet the above conditions, do the following with SQL*Plus

1. Set the column format so that the output prints nicely

COLUMN name FORMAT A50

2. Find the datafile and its current size


SELECT name, bytes/1024/1024 AS size_mb FROM v$datafile;

NAME                                                  SIZE_MB
-------------------------------------------------- ----------
/u01/app/oracle/oradata/users.dbf        10000

3. Shrink the datafile.

ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/XE/users.dbf' RESIZE 1000M;

The final command does the trick and is pretty safe. If you try to shrink to a size that still contains used data, Oracle will throw an error

ORA-03297: file contains used data beyond requested RESIZE value
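
Before resizing, you can estimate the high-water mark of the datafile, i.e. the smallest size it can shrink to. A minimal sketch, assuming the default 8 KB block size (check your DB_BLOCK_SIZE):

SELECT MAX(block_id + blocks - 1) * 8192 / 1024 / 1024 AS hwm_mb
FROM dba_extents
WHERE file_id = (SELECT file# FROM v$datafile WHERE name = '/u01/app/oracle/oradata/XE/users.dbf');

Any RESIZE value at or above hwm_mb should succeed.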


Reference:

1. https://oracle-base.com/articles/misc/reclaiming-unused-space#manual_tablespace_reorganization

Monday, October 30, 2017

Sed - Example guide

sed is a very useful stream editor for performing search and replace. Below are some useful tips

1. Usage

sed 's/apple/orange/' file

The above will look for the first occurrence of apple on each line of the file, replace it with orange, and send the output to stdout (the file itself is unchanged)

2. Make changes to the file

sed -i 's/apple/orange/' file

Adding the -i option makes sed edit the file in place

3. Replace all occurrences

sed -i 's/apple/orange/g' file

Adding the g flag means global replacement: every occurrence on each line is replaced, not just the first

4. Escape single quote

sed -i 's/'\''apple'\''/orange/' file

You can use '\'' to escape a single quote inside a single-quoted sed expression.
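
For example, feeding a made-up line through stdin instead of a file:

echo "eat 'apple' daily" | sed 's/'\''apple'\''/orange/'

This prints: eat orange daily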

Monday, September 11, 2017

Facebook - Control privacy setting of liked page

By default, Facebook displays your liked Pages to the public. It is the user's responsibility to adjust the privacy setting, and in my opinion, Facebook hides the setting well.

To adjust the privacy setting of your liked Pages, do the following

1. Go to your profile page

2. At the top of the profile page, click on More


3. Then, select Likes


4. At the Likes page, you will see a pencil button. Click on it and select Edit Privacy.


5. At the Edit Privacy page, edit the privacy of Likes to either Public, Friends, Only Me or Custom.

Thursday, August 10, 2017

Hadoop - How to setup a Hadoop Cluster

Below is the step-by-step guide I used to set up a Hadoop cluster

Scenario


3 VMs involved:

1) NameNode, ResourceManager - Host name: NameNode.net
2) DataNode 1 - Host name: DataNode1.net
3) DataNode 2 - Host name: DataNode2.net


Prerequisites


1) You could create a new Hadoop user or use an existing user, but make sure the user has access to the Hadoop installation on ALL nodes

2) Install Java. Refer here for a suitable version. In this guide, Java is installed at /usr/java/latest

3) Download a stable version of Hadoop from Apache Mirrors

This guide is based on Hadoop 2.7.1 and assumes that we have created a user called hadoop


Set up Passphraseless SSH from the NameNode to all Nodes.


1) Run the command

ssh-keygen

This command will ask you a set of questions, and accepting the defaults is fine. Eventually, it will create a private key (id_rsa) and a public key (id_rsa.pub) in the user's .ssh directory (/home/hadoop/.ssh)

2) Copy the public key to all Nodes with the following

ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub NameNode.net
ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub DataNode1.net
ssh-copy-id -i /home/hadoop/.ssh/id_rsa.pub DataNode2.net

3) Test the passphraseless SSH connection from the NameNode with

ssh (hostname)


Install Hadoop on all Nodes


1) Extract the downloaded Hadoop distribution to a location the hadoop user can access.

For this guide, I created /usr/local/hadoop and untarred the distribution into that folder. The full path of the Hadoop installation is /usr/local/hadoop/hadoop-2.7.1
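
A minimal sketch of those steps, assuming the tarball hadoop-2.7.1.tar.gz is in the current directory:

sudo mkdir -p /usr/local/hadoop
sudo tar -xzf hadoop-2.7.1.tar.gz -C /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop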


Setup Environment Variables


1) It is best that the Hadoop variables are exported to the environment when the user logs in. To do so, run this command on the NameNode

sudo vi /etc/profile.d/hadoop.sh

2) Add the following in /etc/profile.d/hadoop.sh

export JAVA_HOME=/usr/java/latest
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.1
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

3) Source this file or re-login to set up the environment.

4) (OPTIONAL) Set up the above for all Nodes.


Setup NameNode & ResourceManager


1) Make a directory to hold NameNode data

mkdir /usr/local/hadoop/hdfs_namenode

2) Setup $HADOOP_HOME/etc/hadoop/hdfs-site.xml
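
A minimal sketch of the relevant properties, assuming the hdfs_namenode directory created above and a replication factor of 2 (one copy per DataNode):

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hdfs_namenode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>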




Note: the dfs.namenode.name.dir value must be a URI (file://...)

3) Setup $HADOOP_HOME/etc/hadoop/core-site.xml
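
A minimal sketch; port 9000 is an assumption here (8020 is the other common choice):

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode.net:9000</value>
  </property>
</configuration>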





4) (OPTIONAL) Setup $HADOOP_HOME/etc/hadoop/mapred-site.xml (We are using NameNode as ResourceManager)
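
A minimal sketch that tells MapReduce to run its jobs on YARN:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>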



5) (OPTIONAL) Setup $HADOOP_HOME/etc/hadoop/yarn-site.xml (We are using NameNode as ResourceManager)
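
A minimal sketch that points the NodeManagers at the ResourceManager host and enables the MapReduce shuffle service:

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>NameNode.net</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>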


6) Setup $HADOOP_HOME/etc/hadoop/slaves

First, remove localhost from the file, then add the following
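
DataNode1.net
DataNode2.net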



Setup DataNodes


1) Make a directory to hold DataNode data

mkdir /usr/local/hadoop/hdfs_datanode

2) Setup $HADOOP_HOME/etc/hadoop/hdfs-site.xml
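
A minimal sketch, using the hdfs_datanode directory created above (note the URI form):

<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hdfs_datanode</value>
  </property>
</configuration>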



Note: dfs.datanode.data.dir value must be a URI

3) Setup $HADOOP_HOME/etc/hadoop/core-site.xml
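
The same sketch as on the NameNode; every node must point at the same fs.defaultFS:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://NameNode.net:9000</value>
  </property>
</configuration>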




Format NameNode


The above settings should be enough to set up the Hadoop cluster. Next, for the first time only, you will need to format the NameNode. Use the following command to format the NameNode

hdfs namenode -format
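
If the format succeeds, the output should include a line similar to the following (the directory will match your dfs.namenode.name.dir):

INFO common.Storage: Storage directory /usr/local/hadoop/hdfs_namenode has been successfully formatted.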

Note: the same command can be used to reformat an existing NameNode. But remember to clean up the DataNodes' HDFS folders as well, otherwise the DataNodes will refuse to join the newly formatted namespace.


Start NameNode


You can start HDFS with the provided script

start-dfs.sh

This starts the NameNode and SecondaryNameNode on this host, plus a DataNode on every host listed in the slaves file.

Stop NameNode


You can stop HDFS with the provided script

stop-dfs.sh


Start ResourceManager


You can start the ResourceManager, in this case YARN, with the provided script

start-yarn.sh

This starts the ResourceManager on this host and a NodeManager on every host listed in the slaves file.

Stop ResourceManager


You can stop the ResourceManager, in this case YARN, with the provided script

stop-yarn.sh


Show status of Hadoop


You can use the following command to show the status of the Hadoop daemons on a node

jps
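
After start-dfs.sh and start-yarn.sh, the NameNode should show something like the following (the PIDs here are illustrative); the DataNodes should show DataNode and NodeManager processes instead:

2481 NameNode
2690 SecondaryNameNode
2842 ResourceManager
3150 Jps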


Complete Testing


You can also run a small end-to-end job to perform a complete test and ensure Hadoop is running fine.
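
A minimal smoke test, assuming the examples jar that ships with Hadoop 2.7.1 (the /user/hadoop paths are illustrative):

hdfs dfs -mkdir -p /user/hadoop/input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml /user/hadoop/input
yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /user/hadoop/input /user/hadoop/output
hdfs dfs -cat /user/hadoop/output/part-r-00000

If the word counts print, both HDFS and YARN are working.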
You can access the Hadoop ResourceManager web UI at http://NameNode_hostname:8088



You can also access the Hadoop cluster summary at http://NameNode_hostname:50070. You should be able to see the number of DataNodes set up for the cluster.


Reference


1. http://www.server-world.info/en/note?os=CentOS_7&p=hadoop
2. http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html


Tuesday, July 11, 2017

Android - Peel Remote App "Good Night" Screen


Ever since "accidentally" upgrading to Android 7.0, I have been seeing the following "Good Night" screen at night.



This is irritating, and apparently this "Good Night" screen comes from the Peel Remote app. If you disable the Peel Remote app, the "Good Night" screen will be gone. I don't use this app anyway...



Friday, May 26, 2017

Chrome - Chrome appear out of screen / offscreen

I hate it when Chrome moves off screen and you cannot drag it back. To fix that,

1. Open the Chrome window (it will be off screen somewhere)
2. Press ALT + Spacebar + X

The above will maximize Chrome back onto your primary monitor (ALT + Spacebar opens the window menu, and X is the shortcut for Maximize).

Monday, April 17, 2017

Hive - URISyntaxException: Relative path in absolute URI

Problem


I encountered the following exception when I tried to start up Hive

Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D


Solution


This comes from configuration in hive-site.xml. Open hive-site.xml and look for occurrences of ${system:java.io.tmpdir}/${system:user.name}. If found, replace them with a concrete path, e.g. /tmp/somedir. After that, run Hive again.
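
For instance, hive.exec.local.scratchdir is one of the properties that carries this default; a minimal sketch of the replacement (property names may vary with your Hive version):

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/tmp/hive</value>
</property>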
