FOREIGN TABLES/DATA WRAPPERS
===============================

A foreign data wrapper (FDW) is a library that understands how to read a heterogeneous database's information. For example, PostgreSQL does not understand MySQL's data structures natively, since both engines use different mechanisms. If we want to access data from a heterogeneous database, we need to build and install the respective FDW (Foreign Data Wrapper) into the PostgreSQL library location.

The link below lists all the available Foreign Data Wrappers.
http://wiki.postgresql.org/wiki/Foreign_data_wrappers
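
As an optional check, the wrappers already created in a database can be listed from the standard pg_foreign_data_wrapper system catalog:

postgres=# SELECT fdwname FROM pg_foreign_data_wrapper; -- lists every FDW created in the current database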

Here we have chosen a MySQL table as the source for PostgreSQL. Below are the steps.

1) Install mysql and mysql-devel using yum.

yum install mysql*
2) Install PostgreSQL 9.1 through the EnterpriseDB graphical installer.
3) Get the MySQL FDW from the below link.
https://github.com/dpage/mysql_fdw/archive/master.tar.gz
4) Set the "PATH" as shown below.
export PATH=<PostgreSQL 9.1 Bin>:<Mysql Bin>:$PATH;
Ex:-
[root@localhost mysql_fdw-master]# echo $PATH 
/opt/PostgreSQL/9.1/bin/:/usr/bin/mysql:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
5) Run make & make install as below.
[root@localhost mysql_fdw-master]# make USE_PGXS=1
[root@localhost mysql_fdw-master]# make USE_PGXS=1 install
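
As an optional check before creating the extension, the pg_available_extensions view can confirm that the install step placed the mysql_fdw files where the server expects them:

postgres=# SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'mysql_fdw'; -- should return one row after the install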
6) Create an Extension & Server as below.
postgres=# create EXTENSION mysql_fdw ;
CREATE EXTENSION

postgres=# CREATE SERVER mysql_svr FOREIGN DATA WRAPPER mysql_fdw OPTIONS (address '127.0.0.1', port '3306'); -- MySQL's default port is 3306
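
The new server definition can be verified from the pg_foreign_server catalog (psql's \des shows the same information):

postgres=# SELECT srvname, srvoptions FROM pg_foreign_server WHERE srvname = 'mysql_svr'; -- shows the address/port options given above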
7) Create a user mapping from PUBLIC to the MySQL "root" user.
CREATE USER MAPPING FOR PUBLIC
SERVER mysql_svr
OPTIONS (username 'root', password 'root');
CREATE USER MAPPING
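
The mapping can be checked through the pg_user_mappings view (psql's \deu also lists it; the password option is hidden unless the current user has sufficient privileges):

postgres=# SELECT srvname, usename, umoptions FROM pg_user_mappings WHERE srvname = 'mysql_svr';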
8) Create Foreign Table as below.
postgres=# CREATE FOREIGN TABLE TEST(T INT) SERVER mysql_svr OPTIONS(TABLE 'DINESH.XYZ'); -- DINESH is the database & XYZ is the table.
CREATE FOREIGN TABLE
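
Once several foreign tables have been defined, they can all be enumerated from information_schema.foreign_tables (or with \det in psql):

postgres=# SELECT foreign_table_schema, foreign_table_name, foreign_server_name FROM information_schema.foreign_tables;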
9) From MySQL
mysql> \u DINESH
Database changed
mysql> SELECT * FROM XYZ;
+------+
| T    |
+------+
|    1 |
|    2 |
|    3 |
+------+
3 rows in set (0.00 sec)
10) From PostgreSQL
postgres=# select * from test;
 t 
---
 1
 2
 3
(3 rows)

postgres=# explain analyze select * from test;
                                             QUERY PLAN                                             
----------------------------------------------------------------------------------------------------
 Foreign Scan on test  (cost=10.00..13.00 rows=3 width=4) (actual time=0.211..0.212 rows=3 loops=1)
   Local server startup cost: 10
   MySQL query: SELECT * FROM DINESH.XYZ
 Total runtime: 0.675 ms
(4 rows)

దినేష్ కుమార్ 
Dinesh Kumar
