An SQL join clause combines records from two or more tables. This operation is very common in data processing, and understanding what happens under the hood is important. There are several common join types: INNER, LEFT OUTER, RIGHT OUTER, FULL OUTER and CROSS (or CARTESIAN).
A join that uses the same table on both sides is a self-join. If the join condition uses the equality operator, it is an equi-join; otherwise, it is a non-equi-join.
This article covers the different join types in Apache Spark, as well as examples of slowly changing dimensions (SCD) and joins on non-unique columns.
Sample data
All subsequent explanations of join types in this article make use of the following two tables, taken from the Wikipedia article on joins. The rows in these tables illustrate the effect of different join types and join predicates.
The Employee table has a nullable column. To express this in statically typed Scala, one needs to use the Option type.
The Department table does not have nullable columns, so the type specification can be omitted.
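As a minimal sketch, the sample tables could be created as DataFrames roughly as follows (this assumes an active SparkSession named spark; the concrete names and values are illustrative, based on the standard Wikipedia example):

import spark.implicits._

case class Employee(LastName: String, DepartmentID: Option[Int])
case class Department(DepartmentID: Int, DepartmentName: String)

// Employee table: DepartmentID is nullable, hence Option[Int]
val employee = Seq(
  Employee("Rafferty", Some(31)),
  Employee("Jones", Some(33)),
  Employee("Heisenberg", Some(33)),
  Employee("Robinson", Some(34)),
  Employee("Smith", Some(34)),
  Employee("Williams", None)
).toDF()

// Department table: no nullable columns, so no Option is needed
val department = Seq(
  Department(31, "Sales"),
  Department(33, "Engineering"),
  Department(34, "Clerical"),
  Department(35, "Marketing")
).toDF()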
Inner join
The following SQL code
SELECT * FROM employee INNER JOIN department ON employee.DepartmentID = department.DepartmentID;
could be written in Spark as:
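Here is a sketch of the equivalent DataFrame code, using the employee and department DataFrames assumed above:

// Inner join on the common column; Spark keeps a single DepartmentID column
val innerJoinDf = employee.join(department, "DepartmentID")
innerJoinDf.show()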
Beautiful, isn't it? Spark automatically removes the duplicated “DepartmentID” column, so column names are unique and one does not need a table prefix to address them.
Left outer join
Left outer join is a very common operation, especially if there are nulls or gaps in the data. Note that the column names should be wrapped into a Scala Seq if the join type is specified, as shown below.
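A sketch under the same assumptions about the sample DataFrames:

// With an explicit join type, the using-columns go into a Seq
val leftOuterJoinDf = employee.join(department, Seq("DepartmentID"), "left_outer")
leftOuterJoinDf.show()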
Other join types
Spark allows the following join types: inner, outer, left_outer, right_outer and leftsemi. The interface is the same as for the left outer join in the example above.
For a cartesian join, the column specification should be omitted:
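A minimal sketch; note that crossJoin is the explicit API for this since Spark 2.1 (a plain join() without a condition may require spark.sql.crossJoin.enabled to be set):

// Cartesian (cross) join: every employee row is paired with every department row
val cartesianDf = employee.crossJoin(department)
cartesianDf.show()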
Warning: do not use a cartesian join with big tables in production.
Join expression, slowly changing dimensions and non-equi join
Spark allows us to specify a join expression instead of a sequence of columns. In general, an expression is less readable, so why do we need such flexibility? The reason is the non-equi join.
One application is slowly changing dimensions. Assume there is a table with product prices over time:
There are only two products, steak and fish, and the price of steak has changed once. Another table consists of product orders by day:
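As an illustrative sketch, the two tables could look like this (the concrete dates and prices are assumptions; only the shape of the data matters):

// Product prices over time: the price of steak changed once, fish did not
val prices = Seq(
  ("steak", "2000-01-01", "2020-01-01", 150),
  ("steak", "2020-01-01", "2999-12-31", 180),
  ("fish",  "2000-01-01", "2999-12-31", 100)
).toDF("product", "startDate", "endDate", "price")

// Product orders by day
val orders = Seq(
  ("2019-06-01", "steak"),
  ("2019-12-30", "fish"),
  ("2020-04-10", "steak")
).toDF("orderDate", "product")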
Our goal is to assign the actual price to every record in the orders table. This is not obvious to do using only equality operators; however, a Spark join expression lets us achieve the result in an elegant way:
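A sketch of such a join expression, built on the illustrative prices and orders tables above:

// Non-equi join: match each order with the price record whose validity
// interval contains the order date
val ordersWithPrice = orders.join(prices,
  orders("product") === prices("product") &&
    orders("orderDate") >= prices("startDate") &&
    orders("orderDate") < prices("endDate"))

ordersWithPrice.select(orders("orderDate"), orders("product"), prices("price")).show()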
This technique is very useful, yet not that common. It can save a lot of time both for those who write the code and for those who read it.
Inner join using non-primary keys
The last part of this article is about joins on non-unique columns and the common mistakes related to them. The join (intersection) diagrams from the beginning of this article are stuck in our heads. Because of the visual comparison of set intersections, we assume that the result of an inner join should be smaller than either of the source tables. This is correct only for joins on unique columns and wrong if the join columns in both tables are not unique. Consider the following DataFrame with duplicated records and its self-join:
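A minimal sketch (the column names and values are assumptions):

// A DataFrame with a duplicated key
val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("key", "value")

// Self-join on the non-unique "key" column
val selfJoined = df.join(df, "key")

println(df.count())         // 3
println(selfJoined.count()) // 5: the two "a" rows match each other 2 * 2 = 4 times, plus 1 for "b"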
Note that the size of the result DataFrame is bigger than the size of the source. It could be as big as n², where n is the size of the source: if a join key appears n times in both tables, that key alone produces n² rows in the result.
Conclusion
This article covered the implementation of different join types in Apache Spark, including join expressions and joins on non-unique keys.
Apache Spark allows developers to write code in a way that is easier to understand, which improves code quality and maintainability.