

Hadoop must be installed first; Hadoop 3.2.1 is recommended, as its installation guide provides very detailed steps that are easy to follow. Please also install Cygwin so that we can run Linux shell scripts on Windows: from Hive 2.3.0, the binary doesn't include any CMD files anymore, so you have to use Cygwin or any other bash/sh compatible tool to run the scripts.

Download Binary Package

Download the latest binary from the official website. If you cannot find the package there, you can download it from the archive site too. Save the downloaded package to a local drive. In my case, I am saving it to 'F:\DataAnalytics'.
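If you prefer to fetch the package from the command line, the download can also be done from within Cygwin. The sketch below assumes curl was selected when installing Cygwin and uses the Apache archive URL for Hive 3.0.0; adjust the version and mirror to whatever you actually downloaded.

$ cd /cygdrive/f/DataAnalytics
$ # URL is an assumption based on the Apache archive layout for Hive 3.0.0
$ curl -L -O https://archive.apache.org/dist/hive/hive-3.0.0/apache-hive-3.0.0-bin.tar.gz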
Open a Cygwin terminal, change directory (cd) to the folder where you saved the binary package, and then unpack it:

$ cd /cygdrive/f/DataAnalytics
$ tar -xvzf apache-hive-3.0.0-bin.tar.gz

Setup environment variables

Run the following commands in Cygwin to set up the environment variables:

export HADOOP_HOME='/cygdrive/f/DataAnalytics/hadoop-3.0.0'
export HIVE_HOME='/cygdrive/f/DataAnalytics/apache-hive-3.0.0-bin'
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*.jar

You can also add these lines to ~/.bashrc so that you don't need to run the commands manually each time you launch Cygwin:

vi ~/.bashrc
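As a shortcut, the same exports can be appended to ~/.bashrc in one step. This is only a sketch assuming the install paths used above; adjust the drive and folder names to match your system.

$ cat >> ~/.bashrc <<'EOF'
# Hadoop/Hive environment for Cygwin (paths as used in this guide)
export HADOOP_HOME='/cygdrive/f/DataAnalytics/hadoop-3.0.0'
export HIVE_HOME='/cygdrive/f/DataAnalytics/apache-hive-3.0.0-bin'
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*.jar
EOF
$ source ~/.bashrc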

Next, create the folders that Hive needs in HDFS. Open Command Prompt (not Cygwin) and then run the following commands:

hadoop fs -mkdir /tmp
hadoop fs -chmod g+w /user/hive/warehouse
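The two commands above are only part of the usual setup: /user/hive/warehouse has to be created before it can be chmod-ed, and /tmp normally needs group write access as well. A more complete sequence, given here as an assumption based on the standard Hive setup, is:

hadoop fs -mkdir /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+w /tmp
hadoop fs -chmod g+w /user/hive/warehouse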

Create a symbolic link

As Java doesn't understand Cygwin paths properly, you may encounter errors like the following:

JAR does not exist or is not a normal file: F:\cygdrive\f\DataAnalytics\apache-hive-3.0.0-bin\lib\hive-beeline-3.0.0.jar

In my system, Hive is installed in the F:\DataAnalytics folder. Create a folder in the F: drive named cygdrive, then open Command Prompt (Run as Administrator) and run the following command:

C:\WINDOWS\system32>mklink /J F:\cygdrive\f\ F:\

The output will be:

Junction created for F:\cygdrive\f\ <<===>> F:\

In this way, 'F:\cygdrive\f' will be equal to 'F:\'. You need to change the drive to the appropriate drive where you are installing Hive. For example, if you are installing Hive in the C: drive, the command will be:

C:\WINDOWS\system32>mklink /J C:\cygdrive\c\ C:\
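To check that the junction is in place, you can list the cygdrive folder from Command Prompt; the listing below is illustrative only, but the entry should show up as a junction pointing at the drive root.

C:\WINDOWS\system32>dir F:\cygdrive
...
<JUNCTION>     f [F:\]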
Initialize metastore

Now we need to initialize the schemas for the metastore:

$HIVE_HOME/bin/schematool -dbType <db type> -initSchema

Replace <db type> with the database you are using for the metastore.
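For example, assuming the embedded Derby metastore (an assumption; use mysql, postgres, mssql or oracle instead if your metastore lives in an external database), the initialization and a quick verification look like this in Cygwin:

$HIVE_HOME/bin/schematool -dbType derby -initSchema   # create the metastore schema
$HIVE_HOME/bin/schematool -dbType derby -info         # print the schema version that was created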
