2017년 11월 6일 월요일

Setting up vim syntastic on various linux distros

I use several different notebooks at home and at work with different Linux distros installed, namely Archlinux, Ubuntu 16.04, and Fedora 26. As a systems engineer, using vi/vim is a necessity, as it is installed by default on every *nix machine even when no other editor is available. While I prefer Emacs with Flycheck when programming, I use vim extensively day-to-day to edit shell scripts, yaml files, and other structured text, and a syntax checker is very helpful for catching errors in this work. For this I use the vim plugin syntastic.

Syntastic provides vim with linter/syntax-checking integration for various programming languages and markup formats. While it doesn't work on-the-fly like the syntax checkers in traditional IDEs or Emacs' Flycheck, it does check syntax every time a file is saved with :w.

Recent versions of vim have a native plugin manager enabled by default. Vim plugin files installed through your package manager (pacman, apt-get, dnf / yum) are usually written to /usr/share/vim/vimfiles/plugin/ but you can also set the plugin directory to a location under ~/.vim if you add it to your runtimepath. You can check the current setting of your runtimepath (rtp) in vim by entering:

:set rtp

runtimepath=~/.vim,/usr/share/vim/vimfiles,/usr/share/vim/vim80,/usr/share/vim/vimfiles/after,~/.vim/after,/usr/share/vim/vimfiles/plugin/
Press ENTER or type command to continue

To make vim's built-in plugin loading pick up these files, edit ~/.vimrc and add the following directory to your runtimepath:

set rtp+=/usr/share/vim/vimfiles
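Beyond the runtimepath, syntastic's README (linked in the references below) suggests a few statusline and location-list defaults. A sketch for ~/.vimrc, assuming a stock syntastic install:

```vim
" Recommended syntastic defaults from the project's README (adjust to taste)
set statusline+=%#warningmsg#
set statusline+=%{SyntasticStatuslineFlag()}
set statusline+=%*

let g:syntastic_always_populate_loc_list = 1
let g:syntastic_auto_loc_list = 1
let g:syntastic_check_on_open = 1
let g:syntastic_check_on_wq = 0
```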

Note that on Fedora, in addition to installing the vim-syntastic package, you also have to install a package for each language or DSL you want to use with syntastic, e.g. vim-syntastic-python for Python 2 and 3 linting, vim-syntastic-sh for shellcheck integration, vim-syntastic-ansible for checking ansible yaml files and playbooks, etc.

On Archlinux, instead of setting rtp, you need to specify "packpath" in your .vimrc as follows:

set packpath=/usr/share/vim/vimfiles/plugin/

If you create a ~/.vim directory (where you should place manually downloaded or git-cloned plugins), vim will also check this path as part of rtp.

You can see my personal .vimrc from my dotfiles repo at the link below:
https://github.com/gojun077/jun-dotfiles/blob/master/vimrc

Here is a screenshot of vim + syntastic with shellcheck output for a Bash shell script:

[screenshot omitted]

References

https://github.com/vim-syntastic/syntastic#installation

2017년 10월 26일 목요일

Set Keystone v3 API endpoints in Packstack Newton

The dev team at my current workplace has created an app that integrates with Openstack Newton; however, it only uses the Keystone v3 API. I deployed Newton using the CentOS 7 release from the repo enabled by the centos-release-openstack-newton package from EPEL, with two nodes (one controller, one compute) on top of CentOS 7.4.

As recently as Openstack Mitaka, a packstack deployment using the Keystone v3 API failed when running the cinder.pp puppet manifest, but this issue has been fixed in the RDO Openstack Newton packstack deployment. To enable the Keystone v3 API, simply edit the following line in your packstack answer file:

CONFIG_KEYSTONE_API_VERSION=v3

Then run packstack with packstack --answer-file=<your-answer-file> and the installation will complete successfully.

However, if you go into the Horizon dashboard Admin -> System Info menu, you will see that the Keystone endpoints are still set to v2.0. You can also verify this from the openstack command line (make sure you have the python-openstackclient package installed):

[root@newtonctrl ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
| ID                               | Region    | Service Name | Service Type   | Enabled | Interface | URL                                             |
+----------------------------------+-----------+--------------+----------------+---------+-----------+-------------------------------------------------+
...
| 1ed930a5fad64fdb93cab8c5647a8bbe | RegionOne | keystone     | identity       | True    | internal  | http://172.16.11.201:5000/v2.0                  |
| 403c6f321b364dde821c6057fc81fca4 | RegionOne | keystone     | identity       | True    | public    | http://172.16.11.201:5000/v2.0                  |
| b412ae6f0b0446dcac3d75e68a30803e | RegionOne | keystone     | identity       | True    | admin     | http://172.16.11.201:35357/v2.0                 |
...


In this situation, if you make a curl request to the above endpoints with v2.0 replaced by v3, Keystone will still respond on the v3 URL, but the token payload will contain v2.0 endpoints and v2.0 formatting (v3 requests are simply redirected to v2.0). This is a problem for apps that expect a JSON response using v3 fields and formats, so I had to change the Keystone endpoints to v3 manually.

In Openstack Newton you can create and delete endpoints with the openstack endpoint create and openstack endpoint delete commands, and enable or disable them with openstack endpoint set --enable UUID and openstack endpoint set --disable UUID.

Create Keystone v3 API endpoints

[root@newtonctrl ~(keystone_admin)]# openstack endpoint create identity --region RegionOne public http://172.16.11.201:5000/v3

Repeat this for each interface (i.e. public, internal, and admin).
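Scripted as a loop over the three interfaces, this looks like the sketch below (the URL is taken from the endpoint listing above; the echo makes it a dry run, so drop it to actually create the endpoints as keystone_admin):

```shell
# Dry run: print one endpoint-create command per interface.
# Remove "echo" to execute the commands for real.
for iface in public internal admin; do
  echo openstack endpoint create identity --region RegionOne "$iface" \
       http://172.16.11.201:5000/v3
done
```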

Disable Keystone v2.0 API endpoints

[root@newtonctrl ~(keystone_admin)]# openstack endpoint set --disable 1ed930a5fad64fdb93cab8c5647a8bbe

Repeat this for each Keystone v2.0 API endpoint UUID.
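The disable step can likewise be scripted over the v2.0 UUIDs (the three UUIDs below are the ones from the endpoint listing above; again the echo makes it a dry run, so drop it to execute as keystone_admin):

```shell
# Dry run: print one endpoint-disable command per v2.0 endpoint UUID.
# Remove "echo" to execute the commands for real.
for uuid in 1ed930a5fad64fdb93cab8c5647a8bbe \
            403c6f321b364dde821c6057fc81fca4 \
            b412ae6f0b0446dcac3d75e68a30803e; do
  echo openstack endpoint set --disable "$uuid"
done
```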

[root@newtonctrl ~(keystone_admin)]# openstack endpoint list | grep keystone
| 1ed930a5fad64fdb93cab8c5647a8bbe | RegionOne | keystone     | identity       | False   | internal  | http://172.16.11.201:5000/v2.0                  |
| 403c6f321b364dde821c6057fc81fca4 | RegionOne | keystone     | identity       | False   | public    | http://172.16.11.201:5000/v2.0                  |
| 7cf272994522455790d7dd5a0420b150 | RegionOne | keystone     | identity       | True    | internal  | http://172.16.11.201:5000/v3                    |
| ab64142376ae4aa68e832479295ed301 | RegionOne | keystone     | identity       | True    | public    | http://172.16.11.201:5000/v3                    |
| b412ae6f0b0446dcac3d75e68a30803e | RegionOne | keystone     | identity       | False   | admin     | http://172.16.11.201:35357/v2.0                 |
| e6891754ac154db1b8e32d7f5d67578a | RegionOne | keystone     | identity       | True    | admin     | http://172.16.11.201:5000/v3                    |

You can see that the v2.0 Keystone API endpoints are set to False in the Enabled field, while the v3 endpoints are set to True. If this is not reflected in the Horizon UI, you may have to clear your web browser cache and reload the page. I'm not sure whether this issue has been fixed in RDO Ocata, but I plan to file a bug report on Red Hat Bugzilla.




2017년 7월 29일 토요일

Customizing User-Agent string when using curl or Python requests

The Linux tool cURL and the Python requests library can both be used to submit GET requests to REST API endpoints. On some sites, however, you will get 403 Forbidden or 401 Unauthorized errors unless you change your User-Agent string to something other than the defaults ("curl/<version>" and "python-requests/<version>"). As of July 2017, I have had good results changing my User-Agent string to "Mozilla/5.0".

For curl, the option to change the User-Agent string is -A or --user-agent.

For Python requests, you can add the user-agent string to the headers argument in your get request:

response = requests.get("https://api.coinone.co.kr/orderbook?currency=eth", headers={'User-Agent': 'Mozilla/5.0'})
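To see the spoofed header in action without hitting a third-party API, here is a self-contained sketch using curl's -A flag against a local stand-in server (python3's built-in http.server substitutes for the real endpoint, and port 8037 is an arbitrary choice):

```shell
# Start a throwaway local web server as a stand-in API endpoint
python3 -m http.server 8037 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
# -A sets the User-Agent string for this request
code=$(curl -s -A "Mozilla/5.0" -o /dev/null -w '%{http_code}' http://127.0.0.1:8037/)
echo "HTTP $code"   # 200 on success
kill "$srv"
```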


2017년 7월 15일 토요일

OmegaT 4.1.1 settings for MS Translator under MS Azure

MS Translator used to be available through the Azure Apps Marketplace, but after April 2017 MS Translator is offered as part of Microsoft's Cognitive Services API.

In the past, using OmegaT with MS Translator required you to specify a client id and client secret. The corresponding variables are:

microsoft.api.client_id
microsoft.api.client_secret

With the move to MS Cognitive Services, OmegaT has a new variable which must be passed to the MS Translator API:

microsoft.api.subscription_key

From the Azure Dashboard, select your MS Translator app and then select Keys. You will see your app name and two keys. You must enter one of these keys into the variable microsoft.api.subscription_key for OmegaT to authenticate with MS Translator.

Here is my OmegaT 4.1.1 launch script on Linux:

#!/bin/bash
# Launch Script for OmegaT CAT tool
# GOOGTRANS and MSTRANS represent the API
# keys for Google Translate API v2 and
# Microsoft Translator, respectively
#
# Last Updated: 2017-07-11
# Jun Go gojun077@gmail.com

GOOGTRANS=$(<"$HOME/SpiderOak Hive/keys/googleTranslateAPIkey.pw")
MSTRANS=$(<"$HOME/SpiderOak Hive/keys/microsoftTranslatorAPIkey.pw")
OTPATH=$HOME/omegat
XMODIFIERS=@im=ibus java -jar -Xmx512M -Dgoogle.api.key="$GOOGTRANS" \
          -Dmicrosoft.api.subscription_key="$MSTRANS" \
          -Dmicrosoft.api.client_id="name_of_your_app" \
          -Dmicrosoft.api.client_secret="$MSTRANS" \
          -Dswing.crossplatformlaf=com.sun.java.swing.plaf.gtk.GTKLookAndFeel \
          "$OTPATH"/OmegaT.jar


References:
https://sourceforge.net/p/omegat/svn/9562/tree//trunk/src/org/omegat/Bundle.properties?barediff=5161c5ece88f3d0a5207336e:9561

2017년 5월 6일 토요일

RHEL 7.X DBus.Error.AccessDenied caused by permissions problem on root partition

At a client site, I have 6 nodes running Openstack Mitaka on top of RHEL 7.2. After rebooting one node, however, the networking configs in /etc/sysconfig/network-scripts were not being loaded (specifically, the OVS bridges necessary for Openstack to run, i.e. br-ex, br-int). When I attempted to manually load the network settings with

systemctl start network

I was told that the systemd unit file network.service does not exist! This file is normally generated automatically by systemd-sysv-generator at boot from legacy SystemV scripts in /etc/init.d/ and written to /run/systemd/generator.late/network, but for some reason this was not happening.

Because RHEL 7.2 was not reading my network config files, I decided to manually create the OVS bridges using the following commands:

ovs-vsctl add-br br-ex
ip link set br-ex up
ovs-vsctl add-port br-ex eno1

To use OpenVSwitch, however, systemd's openvswitch.service must be running. When I tried to invoke the service with systemctl start openvswitch, I got the following error:

DBus.Error.AccessDenied: An SELinux policy prevents this sender from sending this message to this recipient

The journalctl log also showed a flood of auditd errors, repeating every 3 seconds or so.

It turns out that this is a permissions problem on /! According to Red Hat, the proper permissions on the root partition are 555, or r-x r-x r-x. After changing the permissions and rebooting, I no longer get the DBus.Error.AccessDenied error message. I don't know why the perms on / have to be set to 555 (on a personal Archlinux installation without SELinux, the perms on / are 755, rwx r-x r-x), and I don't know how the perms got changed in the first place; the shell history on the affected nodes shows no record of anyone changing permissions on the root partition.
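The fix itself is a one-liner run as root on the affected node (chmod 555 /). The snippet below demonstrates the target mode on a scratch directory rather than the real root partition:

```shell
# Demonstrate the 555 (r-xr-xr-x) mode on a scratch directory;
# on the affected node the actual command is:  chmod 555 /
mkdir -p /tmp/rootperm-demo
chmod 555 /tmp/rootperm-demo
stat -c '%a %A' /tmp/rootperm-demo   # prints: 555 dr-xr-xr-x
```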




References:
https://access.redhat.com/solutions/1990203 (you must register in order to access this Knowledge Base solution)

2017년 3월 11일 토요일

Generate /etc/shadow PW hash from the cli using python2 and 3

In /etc/shadow, hashed and salted passwords are stored together with the user name as follows:

myuser:$6$<salt>$<hashedpw>:...

where the identifier following the first $ can take the following values, corresponding to these hash algorithms:

1    md5
2a   Blowfish
2y   Blowfish with correct 8-bit char handling
5    sha-256
6    sha-512

Many how-tos on the Internet recommend using mkpasswd from the expect package, but I find it much easier to use python2 or python3 to generate the salted hash.

Python 2:

python -c 'import crypt,getpass; print crypt.crypt(getpass.getpass())'

You will then be prompted to enter your plaintext password, after which an /etc/shadow-compatible hash will be printed.


Python 3:

python3 -c 'import crypt; print(crypt.crypt("yourpw", crypt.mksalt(crypt.METHOD_SHA512)))'

In this snippet you simply pass your plaintext password as an argument, and an /etc/shadow-compatible hash is printed to the terminal (note that the plaintext password will remain in your shell history).

You can copy-paste this salted hash into a Kickstart file (RHEL and variants) or a debian-installer preseed file (Debian and variants) for automated installations.
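Putting the Python 3 one-liner to work, the sketch below generates a hash and then verifies it ("yourpw" is a placeholder; note the crypt module is deprecated since Python 3.11 and removed in 3.13):

```shell
# Generate an SHA-512 shadow hash ("yourpw" is a placeholder password)
HASH=$(python3 -c 'import crypt; print(crypt.crypt("yourpw", crypt.mksalt(crypt.METHOD_SHA512)))')
echo "$HASH"
# Verify: hashing again with the full hash as the salt must reproduce it
CHECK=$(python3 -c "import crypt; print(crypt.crypt('yourpw', '$HASH'))")
[ "$HASH" = "$CHECK" ] && echo "password verified"
```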


References:

https://access.redhat.com/solutions/221403 (requires registration)

http://serverfault.com/questions/330069/how-to-create-an-sha-512-hashed-password-for-shadow

2017년 2월 25일 토요일

Using ibus in non-GTK/QT apps like Emacs, Java, and Enlightenment/EFL

ibus is a popular Input Method Editor (IME) for Linux, which I use for entering Korean and Chinese characters (via ibus-hangul). ibus works well with apps built on the GTK or QT UI frameworks, but it sometimes behaves strangely in GUI apps that don't use these frameworks. For example, in Emacs, OmegaT (which runs on OpenJDK 7 or 8), and Terminology (a nice terminal emulator built on the Enlightenment Foundation Libraries), every time I press SPACEBAR after a word, the SPACE character is inserted to the left of the last character. In other words:

"가나다 "

appears as

"가나 다"

To avoid this issue in non-GTK/QT apps on Linux, I launch the ibus daemon as follows:

env IBUS_ENABLE_SYNC_MODE=0 ibus-daemon -rdx

You can read an explanation of the ibus-daemon option flags in a previous post from my blog.

Brandon Schaefer of Canonical explains why this error occurs:

The problem seems to be that when IBUS_ENABLE_SYNC_MODE is enabled it pushes all the events through the im engine (such as ibus-hangul) and since it normally only handles Korean text it doesn't know what to do when, say a space is sent through, so it says it didn't handle that space event which then IBus handles it and commits that space BEFORE the preedit.

In addition, my .bashrc contains the following ibus settings:

##### ibus IME settings #####
export GTK_IM_MODULE=ibus
export XMODIFIERS=@im=ibus
export QT_IM_MODULE=ibus
export CLUTTER_IM_MODULE=ibus
export ECORE_IMF_MODULE=xim

The ECORE_IMF_MODULE setting is for Enlightenment apps like the Terminology terminal.

Once the ibus environment variables are properly set and ibus-daemon is launched with the appropriate options, you will be able to enter Korean and other Asian text into non-QT/GTK apps with ibus.

References:
https://github.com/ibus/ibus/issues/1847

https://bugs.launchpad.net/ubuntu/+source/unity/+bug/880876

2017년 2월 2일 목요일

Fix black screen in tty mode with Ubuntu 16.04.1 on ASUS Prime Z270-K with KabyLake CPU

After installing Ubuntu 16.04.1 LTS via USB in UEFI mode, I rebooted and was met by a black screen. From another machine on the local network, I was able to ssh into the newly-installed Ubuntu 16.04.1 box and noticed the following in dmesg:

[  +0.000000] Call Trace:
[  +0.000002]  [] dump_stack+0x63/0x90
[  +0.000001]  [] warn_slowpath_common+0x82/0xc0
[  +0.000001]  [] warn_slowpath_fmt+0x5c/0x80
[  +0.000014]  [] __unclaimed_reg_debug+0x80/0x90 [i915_bpo]
[  +0.000012]  [] gen9_read32+0x35e/0x390 [i915_bpo]
[  +0.000002]  [] ? __pm_runtime_resume+0x5b/0x70
[  +0.000016]  [] intel_digital_port_connected+0xf8/0x290 [i915_bpo]
[  +0.000013]  [] ? intel_display_power_get+0x3b/0x50 [i915_bpo]
[  +0.000015]  [] intel_hdmi_detect+0x4b/0x140 [i915_bpo]
[  +0.000003]  [] drm_helper_probe_single_connector_modes_merge_bits+0x235/0x4d0 [drm_kms_helper]

Googling reveals many similar issues with i915 Kernel Mode Setting for Intel integrated graphics on newer Intel CPUs (Skylake and beyond).

At the GRUB menu after rebooting, I added the kernel boot parameter nomodeset and booted with F10. With Linux no longer trying to set the video resolution automatically, I was able to boot to a visible login prompt.

To make the kernel boot parameter change permanent, I edited /etc/default/grub as follows:

GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"

I then ran sudo update-grub to update /boot/grub/grub.cfg
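The edit can also be scripted. The sketch below runs the sed expression against a scratch copy first (the "quiet splash" starting value is an assumed Ubuntu default), so you can check the result before touching the real /etc/default/grub:

```shell
# Try the substitution on a scratch copy of the GRUB default line
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub-default.sample
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"/' /tmp/grub-default.sample
cat /tmp/grub-default.sample
# Apply the same sed to /etc/default/grub with sudo, then: sudo update-grub
```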

Some other hardware info:

$ lscpu
...
Model name:            Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz

$ dmidecode -t 2
...
Base Board Information
Manufacturer: ASUSTeK COMPUTER INC.
Product Name: PRIME Z270-K
Version: Rev X.0x


References:

http://askubuntu.com/questions/38780/how-do-i-set-nomodeset-after-ive-already-installed-ubuntu/38782#38782

2017년 1월 21일 토요일

Migrate from cinder loopback device to physical block device on RDO Mitaka

In PoC or test installations of RDO Mitaka via Packstack, the cinder-volumes LVM volume group is created by default as a loopback file/device under /var/lib/cinder/.

This might be OK for light testing, but if you plan to use cinder volumes in production you need to create the cinder-volumes VG on a real physical device.

After stopping all openstack services with openstack-service stop I used vgs and lvs to take a look at the LVM volume groups and logical volumes on my Openstack storage node (which I separated from the control node using unsupported config options in my Packstack answer file).

Despite stopping Openstack services, when I tried to use lvchange -an and vgchange -an to deactivate the LVs in the cinder-volumes VG, I kept getting error messages that some logical volumes in the VG were still active.

I finally just used gdisk to delete the problematic LVM partition housing cinder-volumes, rebooted and then created a new cinder-volumes VG as detailed in my previous post about setting up Cinder to use a physical block device.

However, after restarting Openstack services, openstack-cinder-volume.service failed to start. I examined the systemd service file for cinder-volume.service in /usr/lib/systemd/system/ and noticed that it contained the line

After = openstack-losetup.service

It turns out that /usr/lib/systemd/system/openstack-losetup.service sets up a loop device to act as a "disk" for storing the cinder-volumes Volume Group. Of course the loop device is just a file with really poor I/O performance, so it should only be used for test setups.

I therefore deleted openstack-losetup.service, removed the After = openstack-losetup.service line from the systemd unit for cinder-volume.service, and then executed systemctl daemon-reload to reload all systemd unit files.
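The unit-file edit can be sketched as below, demonstrated on a scratch copy (only the After = line is from the actual unit; the rest of the scratch file is filler, and on a real node the target is /usr/lib/systemd/system/openstack-cinder-volume.service, followed by systemctl daemon-reload):

```shell
# Build a scratch copy of the unit, then delete the losetup dependency line
cat > /tmp/openstack-cinder-volume.service <<'EOF'
[Unit]
After = openstack-losetup.service

[Service]
EOF
sed -i '/openstack-losetup\.service/d' /tmp/openstack-cinder-volume.service
! grep -q losetup /tmp/openstack-cinder-volume.service && echo "losetup dependency removed"
```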

Now openstack-cinder-volume.service starts without any errors and doesn't require a loopback device.