How to Be More Assertive at Work (Without Being a Jerk)

Have you ever admired a co-worker who’s able to navigate challenging situations with ease and professionalism, no matter the politics and difficult personalities involved? You know the type: She has a Teflon-like ability to deflect anger and frustration in the problem-solving process and doesn’t settle for an outcome that would sacrifice her self-respect or clout among colleagues.

What she’s exhibiting is a key personality attribute that’s important in both business and life: assertiveness. For those of us who avoid confrontation like the plague—or, on the flipside, those of us who have hair-trigger tempers—this calm-yet-effective, agreeable-yet-firm temperament seems superhuman. Assertiveness requires skill and can take time to cultivate, but it’s a quality you can (and should) aspire to master.

Put simply, being assertive is a happy medium between the two extremes of aggressive and passive. While aggressive people adopt the “my way or the highway” stance, coming off as hostile and abrasive, passive people can be pushovers, giving up their power and allowing themselves to be taken advantage of, creating a surefire recipe for burnout and resentment.

Assertive people, on the other hand, tend to seek out and create win-win scenarios. Assertive people understand the value of making their desires and beliefs known, but their pride isn’t damaged if their solution isn’t the one that comes out on top. Confident and assured, these people approach situations with a healthy dose of objectivity, and as a result, are able to communicate clearly and work through challenges in a low-stress, no-drama, and self-honoring way.

Many people find it challenging to project assertiveness precisely because it requires you to walk a fine line between being pushy and pacifying. To help you navigate this tricky road, here are a few examples of how to be more assertive in some common workplace scenarios—without turning into the office jerk.

Situation #1: Getting the Team Behind Your Plan

Your team is in charge of launching a new sales campaign, and you have a killer idea. The team meets to discuss how to get started, and you’re excited to propose your approach.

  • Passive Approach: You wait for your boss to make the first suggestion, then take the path of least resistance by agreeing, rather than putting your idea on the table or even suggesting ways to improve upon her strategy.
  • Aggressive Approach: You immediately present your “perfect” idea as the one the team needs to adopt and, without taking a breath, begin assigning tasks. If anyone tries to suggest an alternative, you shake your head and say, “That won’t work.” All the while, you’re pretending not to notice the eye-rolls happening around the table.
  • Assertive Approach: As you listen to the various suggestions your colleagues are floating, you both acknowledge their strong points and assume a role in solving potential challenges. You might say, for example, “It’s a great idea to track prospective client interactions. What if we did it over six months instead of three? That would allow us to collect more data and make better decisions for the next fiscal year.”

In this last scenario, you’ve stated your case in a way that acknowledges others’ perspectives and backed up your ideas with factual reasoning, rather than emotions. You’ve successfully contributed value to the conversation, but not at the cost of making other team members feel unvalued.

Situation #2: It’s Time for a Raise, But Your Boss Isn’t Making Any Moves

After asking for a raise during a check-in with your boss, she says that you’ll have to wait at least another six months. The company’s just not able to give raises right now, but she assures you your performance is such that you’ll be considered for a salary bump when the time is right.

  • Passive Approach: You swallow your disappointment and nervously utter, “Oh, that’s fine—no problem,” to assuage the awkwardness of the situation. But later, you go home and complain about it for hours, because you feel it’s completely unjust.
  • Aggressive Approach: After being told you’ll need to wait for a raise, you inform your boss that you’re going to begin to look for opportunities elsewhere—where someone will treat you like you deserve to be treated.
  • Assertive Approach: Because you respect yourself and your need to be compensated fairly as much as you want to understand your boss’ reasoning, you don’t let your bruised ego get the best of you and lash out. Instead, you ask for more clarity on the company’s future and define tangible goals and targets that you can review when you revisit your salary request down the road.

In the assertive approach, you’re showing resilience by responding in a proactive, future-oriented manner, signaling maturity, level-headedness, and a commitment to the company.

Situation #3: Managing the Team for Top Results

One of your direct reports is seriously missing the mark. His deliverables are sloppy, other colleagues are starting to complain about having to pick up his slack, and on top of all that, he rolls in late every day. It’s time to step in.

  • Passive Approach: Next time he turns in a terrible first draft of a report, you stay up until 2 AM redoing it on your own—and then fume about his poor performance to other colleagues when he’s not around.
  • Aggressive Approach: Go full-on Jerry Maguire on him, demanding to know why he’s so stupid, assuring him that he’s unhirable anywhere else, and that you’re doing him a favor by not letting him go—all but firing him on the spot.
  • Assertive Approach: In a private meeting, you clearly communicate why his work isn’t acceptable, pointing to his failure to satisfy core procedural requirements, but are careful not to take aim at his personal qualities. Harnessing your emotional intelligence and empathy, you invite him to let you know if there is anything else going on. Perhaps he’s struggling with personal issues that are pulling his focus away from work. Or, maybe he’s not clear on your instructions. To keep projects on track and improve your relationship, you schedule a weekly meeting to check in and create a channel for clear communication.

In the final option, you’ve taken control of the situation instead of letting the problem linger and have presented a scenario in which both you and your report win.

Learning how to be more assertive—sticking up for yourself without being a total jerk—will not only earn you respect among co-workers, but it’ll also reduce your stress, making you feel more confident about yourself and your interactions with others. This high road that assertive people take is where the best outcomes happen—so by training yourself to look for the win-win opportunities in challenging situations, you’ll come out on top.


SSIS to SQL Server Data Type Translations

It can be extremely confusing when you first encounter SSIS data types.  At first glance, they seem to be nothing like the SQL Server data types you know and love.  That’s why I’ve provided below a conversion chart of SSIS data types to SQL Server data types.  This information is readily available on MSDN, but it always seems difficult to find.  Hope this helps!

[Conversion chart: SSIS data types to SQL Server data types]

SQL – Queries Tuning and Optimization Techniques

It is very difficult to write complex SQL queries involving joins across many (at least 3-4) tables and several nested conditions, because a SQL statement, once it reaches a certain level of complexity, is basically a little program in and of itself.

A database index is a data structure that improves the speed of operations on a database table. Indexes can be created using one or more columns of a database table, providing the basis for both rapid random lookups and efficient access of ordered records. Indexing is especially important when working with large tables; however, smaller tables should occasionally be indexed as well, if they are expected to grow.

Try to indent consistently and don’t be afraid to use multiple lines. You don’t have to write it all at once: complex queries can often be built up from a collection of simple queries. Before you start, take the time to think through some basics, such as:

  1. List all of the columns that are to be returned
  2. List all of the columns that are used in the WHERE clause
  3. List all of the columns used in the JOINs (if applicable)
  4. List all the tables used in JOINs (if applicable)
  5. Get the correct records selected first
  6. Save the complex calculations for last
  7. If you do use a Common Table Expression (CTE), be aware that it only persists until the next query is run, so in cases where you would reference the CTE from multiple queries, it might be better for performance to use a temp table.
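The CTE point above can be sketched as follows; the table and column names are illustrative (an AdventureWorks-style schema), not part of the original article:

```sql
-- A CTE is only visible to the single statement that follows it:
WITH RecentOrders AS (
    SELECT CustomerID, OrderDate, TotalDue
    FROM   Sales.SalesOrderHeader
    WHERE  OrderDate >= '2024-01-01'
)
SELECT CustomerID, COUNT(*) AS OrderCount
FROM   RecentOrders
GROUP BY CustomerID;
-- RecentOrders no longer exists at this point.

-- If several later queries need the same rows, materialize them once:
SELECT CustomerID, OrderDate, TotalDue
INTO   #RecentOrders                  -- temp table persists for the session
FROM   Sales.SalesOrderHeader
WHERE  OrderDate >= '2024-01-01';

SELECT CustomerID, COUNT(*) AS OrderCount FROM #RecentOrders GROUP BY CustomerID;
SELECT SUM(TotalDue) AS RecentTotal       FROM #RecentOrders;
```

The temp table trades some I/O up front for not re-evaluating the same result set in each query that needs it.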


Once you have the above information organized into this easy-to-comprehend form, it is much easier to identify the columns that could potentially make use of indexes when the query is executed.

It makes a big difference to really understand how the data is combined, selected, filtered, and output. This is where query optimization techniques come into the picture. There are many guidelines for tuning a query that can genuinely boost its performance.  These guidelines are listed below:

  1. SET NOCOUNT ON at the beginning of each stored procedure you write. This statement should be included in every stored procedure, trigger, etc. that you write.
  2. The SQL query becomes faster if you use the actual column names in the SELECT statement instead of ‘*’.
  3. HAVING clause is used to filter the rows after all the rows are selected if you are using aggregation functions. It is just like a filter. Do not use HAVING clause for any other purposes.
  4. It is the best practice to avoid sub queries in your SQL statement and try to minimize the number of subquery block in your query if possible.
  5. Use the EXISTS and IN operators and table joins appropriately in your query, because IN usually has the slowest performance.
  6. IN is efficient when most of the filter criteria are in the sub-query.
  7. EXISTS is efficient when most of the filter criteria is in the main query.
  8. Use EXISTS instead of DISTINCT when using joins which involves tables having one-to-many relationship.
  9. Be careful while using conditions in WHERE clause.
  10. To write queries which provide efficient performance follow the general SQL standard rules.
  11. Use single case for all SQL verbs
  12. Begin all SQL verbs on a new line
  13. Separate all words with a single space
  14. Right or left aligning verbs within the initial SQL verb
  15. Indexes have advantages as well as disadvantages, as the following points show:
  16. Do not automatically add indexes on a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table.
  17. Indexes should be measured on all columns that are frequently used in WHERE, ORDER BY, GROUP BY, TOP and DISTINCT clauses.
  18. Do not add more indexes than necessary on your OLTP tables, to minimize the overhead that indexes incur during data modifications.
  19. Generally, drop indexes that are not used by the Query Optimizer.
  20. If possible, try to create indexes on columns that have integer values instead of characters. Integer values use less overhead than character values.
  21. The query optimizer needs up-to-date statistics to make smart query optimization decisions, so you will generally want to leave the “Auto Update Statistics” database option on. This helps ensure that the optimizer statistics are valid and that queries are properly optimized when they are run.
  22. If you want to boost the performance of a query that includes an AND operator in the WHERE clause, consider the following:
  23. Of the search criteria in the WHERE clause, at least one of them should be based on a highly selective column that has an index.
  24. If at least one of the search criteria in the WHERE clause is not highly selective, consider adding indexes to all of the columns referenced in the WHERE clause.
  25. If none of the columns in the WHERE clause are selective enough to use an index on their own, consider creating a covering index for this query.
  26. Queries that include either the DISTINCT or the GROUP BY clause can be optimized by including appropriate indexes. Any of the following indexing strategies can be used:
  27. Include a covering, non-clustered index (covering the appropriate columns) of the DISTINCT or the GROUP BY clauses.
  28. Include a clustered index on the columns in the GROUP BY clause.
  29. Include a clustered index on the columns found in the SELECT clause.
  30. Adding appropriate indexes to queries that include DISTINCT or GROUP BY is most important for those queries that run often.
  31. When you need to execute a string of Transact-SQL, you should use the sp_executesql stored procedure instead of the EXECUTE statement.
  32. When calling a stored procedure from your application, it is important that you call it using its qualified name.
  33. Use stored procedures instead of views where performance matters; they offer better performance and don’t carry code, variables, or parameters that do nothing.
  34. If possible, avoid using SQL Server cursors. They generally use a lot of SQL Server resources and reduce the performance and scalability of your applications.
  35. Instead of using temporary tables, consider using a derived table instead. A derived table is the result of using a SELECT statement in the FROM clause of an existing SELECT statement. By using derived tables instead of temporary tables, you can reduce I/O and often boost your application’s performance.
  36. Don’t use the NVARCHAR or NCHAR data types unless you need to store 16-bit character (Unicode) data. They take up twice as much space as VARCHAR or CHAR data types, increasing server I/O and wasting unnecessary space in your buffer cache.
  37. If you use the CONVERT function to convert a value to a variable length data type such as VARCHAR, always specify the length of the variable data type. If you do not, SQL Server assumes a default length of 30.
  38. If you are creating a column that you know will be subject to many sorts, consider making the column integer-based and not character-based. This is because SQL Server can sort integer data much faster than character data.
  39. Don’t use ORDER BY in your SELECT statements unless you really need to, as it adds a lot of extra overhead. For example, perhaps it may be more efficient to sort the data at the client than at the server.
  40. Don’t return more data (both rows and columns) from SQL Server than you need to the client or middle-tier and then further reduce the data to the data you really need at the client or middle-tier. This wastes SQL Server resources and network bandwidth.
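To make tips 5-8 above concrete, here is a sketch using the AdventureWorks-style tables that appear later in this document (the exact schema is assumed):

```sql
-- IN: the subquery drives the filter; fine when the subquery result is small.
SELECT SalesOrderID
FROM   Sales.SalesOrderHeader
WHERE  CustomerID IN (SELECT CustomerID
                      FROM   Sales.Customer
                      WHERE  TerritoryID = 1);

-- EXISTS: stops probing as soon as one match is found; often better when
-- most of the filtering happens in the main query.
SELECT h.SalesOrderID
FROM   Sales.SalesOrderHeader AS h
WHERE  EXISTS (SELECT 1
               FROM   Sales.Customer AS c
               WHERE  c.CustomerID  = h.CustomerID
               AND    c.TerritoryID = 1);

-- EXISTS instead of DISTINCT over a one-to-many join: duplicate rows are
-- never produced, so none need to be removed afterwards.
SELECT c.CustomerID
FROM   Sales.Customer AS c
WHERE  EXISTS (SELECT 1
               FROM   Sales.SalesOrderHeader AS h
               WHERE  h.CustomerID = c.CustomerID);
```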
To tune our SQL queries, understanding our database plays the most important role. In SQL, each table column typically has an associated data type: Text, Integer, Varchar, Date, and more are available for developers to choose from. When writing SQL statements, make sure you choose the proper data type for each column. Sometimes it’s easier to break subgroups out into their own SELECT statements. Before writing a query, we need to understand its actual purpose and its scope.
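Tip 31 above (prefer sp_executesql over EXECUTE for dynamic SQL) can be sketched like this; the table is from the AdventureWorks samples used later in this document, and the variable name is illustrative:

```sql
-- EXECUTE concatenates the value into the SQL text: no reliable plan reuse,
-- and an injection risk if the value came from user input.
DECLARE @minWeight DECIMAL(8,2) = 29.00;
EXECUTE ('SELECT Name FROM Production.Product WHERE Weight >= '
         + CAST(@minWeight AS VARCHAR(20)));

-- sp_executesql keeps the parameter out of the SQL text, so one cached plan
-- can be reused for any @w value.
EXECUTE sp_executesql
    N'SELECT Name FROM Production.Product WHERE Weight >= @w',
    N'@w DECIMAL(8,2)',
    @w = @minWeight;
```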





Quick Tricks: Column and block text selection using SSMS

There are several ways to select text as shown below, including the ability to select and edit columns.

Using SHIFT to Select Text

It is well known that using the SHIFT key you can perform normal text selection in SSMS.

If you put your cursor to the left of “dbo.DimEmployee” and hold the SHIFT key and then put your cursor at the end of “dbo.DimReseller” it will select the first three lines of code as shown below.

[Screenshot: normal text selection using SHIFT in SSMS]

Using SHIFT+ALT to Select Columns

If you would like to select columns or blocks then Microsoft SQL Server offers a solution for you. You can use the key shortcut SHIFT+ALT as described in the following steps. Please note that this feature works using SSMS for SQL Server 2008 and up.

Place your cursor to the left of “dbo.DimEmployee”, press SHIFT+ALT then click at the end of “dbo” in “dbo.DimProductCategory”. This will select columns or blocks in SQL Server Management Studio as shown below.

[Screenshot: column/block selection using SHIFT+ALT in SSMS]

Using SHIFT+ALT to Select Columns and Insert Text

In SSMS for SQL Server 2012 and up, you can also use SHIFT+ALT to insert text in this block mode.

First place the cursor in the first row where you would like to insert the text (to the left dbo.DimEmployee in our example). Press SHIFT+ALT and click in the last line where you would like to append this text (left of dbo.DimProductCategory). Now type “SELECT * FROM ” and this text will be inserted for each line as shown below.

[Screenshot: inserting text in block mode using SHIFT+ALT in SSMS]

Using CTRL+SHIFT+END to Select Text

If you want to select all text from a starting point to the end you can use CTRL+SHIFT+END.

Put your cursor at the beginning point and press CTRL+SHIFT+END to select all text from that point to the end of the text as shown below.

[Screenshot: selecting text to the end using CTRL+SHIFT+END in SSMS]

Using CTRL+SHIFT+HOME to Select Text

If you want to select all text from a starting point to the beginning you can use CTRL+SHIFT+HOME.

Put your cursor at the beginning point and press CTRL+SHIFT+HOME to select all text from that point to the beginning of the text as shown below.

[Screenshot: selecting text to the beginning using CTRL+SHIFT+HOME in SSMS]

Using CTRL+A to Select All Text

If you want to select all text you can use CTRL+A.

Just press CTRL+A anywhere in the query editor and this will select all text as shown below.

[Screenshot: selecting all text using CTRL+A in SSMS]




SQL Server Data Types you Must Know

Why data types are important

  1. The data is stored in the database in a consistent and known format.
  2. Knowing the data type allows you to know which calculations and formulations you can use on the column.
  3. Data types affect storage. Some values take up more space when stored in one data type versus another.  Take our age tables above for example.
  4. Data types affect performance. The less time the database has to spend inferring or converting values, the better.  (“Is December 32, 2015 a date?”)

Commonly used SQL Server Data Types

There are over thirty different data types you can choose from when defining columns.  Many of these are set up for very specific jobs, such as storing images, while others are more suitable to general use.

Here are the data types you’ll most frequently encounter in your everyday use of SQL.  These are:

  • INT
  • VARCHAR and NVARCHAR
  • DATETIME
  • DECIMAL and FLOAT
  • BIT

INT – Integer Data Type

The integer data type is used to store whole numbers.  Examples include -23, 0, 5, and 10045.  Whole numbers don’t include decimal places.  Since SQL Server uses a fixed number of bytes to represent an integer, there are maximum and minimum values it can represent.  An INT data type can store a value from -2,147,483,648 to 2,147,483,647.

Practical uses of the INT data type include using it to count values, store a person’s age, or use as an ID key to a table.

But INT wouldn’t be so good to keep track of a terabyte hard drive address space, as the INT data type only goes to 2 billion and we would need to track into the trillions.  For this you could use BIGINT.

The INT data type can be used in calculations.  Since DaysToManufacture is defined as INT we can easily calculate hours by multiplying it by 24:

SELECT DaysToManufacture,
       DaysToManufacture * 24 as HoursToManufacture
FROM   Production.Product

Here you can see the results

[Result grid: using INT to perform calculations]

There are many operations and functions you can use with integers which we’ll cover once we dig into functions.

VARCHAR and NVARCHAR – Text Values

Both VARCHAR and NVARCHAR are used to store variable length text values.  “VARCHAR” stands for variable length character.

The number of characters a VARCHAR or NVARCHAR column can store is defined in the column definition.   For instance, as you can see in the following column definition from the Object Explorer, the product name is defined to hold fifty characters.

[Screenshot: VARCHAR column definition shown in SQL Server Management Studio]

What makes VARCHAR popular is that values less than fifty characters take less space.  Only enough space to hold the value is allocated.  This differs from the CHAR data type which always allocates the specified length, regardless of the length of the actual data stored.

The VARCHAR datatype can typically store a maximum of 8,000 characters.  The NVARCHAR datatype is used to store Unicode text.  Since UNICODE characters occupy twice the space, NVARCHAR columns can store a maximum of 4,000 characters.

The advantage NVARCHAR has over VARCHAR is that it can store Unicode characters.  This makes it handy for storing extended character sets like those used in writing systems such as Kanji.

If your database was designed prior to SQL 2008 you’ll most likely encounter VARCHAR; however, more modern databases or those global in nature tend to use NVARCHAR.
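A quick sketch of the difference (the temp table and values here are illustrative, not from the original article):

```sql
CREATE TABLE #Demo (
    NameA VARCHAR(50),    -- up to 50 single-byte characters
    NameB NVARCHAR(50)    -- up to 50 Unicode characters, two bytes each
);

-- The N prefix marks a Unicode literal; without it, characters outside the
-- database's code page can be silently lost on insert.
INSERT INTO #Demo (NameA, NameB)
VALUES ('Widget', N'ウィジェット');
```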

DATETIME – Date and Time

The DATETIME data type is used to store the date and time.  An example of a DATETIME value is

1968-10-23 1:45:37.123

This is the value for October 23rd, 1968 at 1:45 AM.  Actually, the time is more precise than that: it is really 45 minutes, 37.123 seconds past the hour.

In many cases you just need to store the date.  In these cases, the time component is zeroed out.  Thus, November 5th, 1972 is

1972-11-05 00:00:00.000

A DATETIME can store dates from January 1, 1753, through December 31, 9999.  This makes the DATETIME good for recording dates in today’s world, but not so much in William Shakespeare’s.

As you get more familiar with the various SQL built-in functions, you’ll be able to manipulate the data.  To give you a glimpse, we’ll use the YEAR function to count employees hired each year.  When given a DATETIME value, the YEAR function returns the year.

The query we’ll use is

SELECT   YEAR(HireDate),
         COUNT(*)
FROM     HumanResources.Employee
GROUP BY YEAR(HireDate)

And here are the results

[Result grid: using YEAR on the DATETIME data type]

The benefit is the DATETIME type ensures the values are valid dates.  Once this is assured, we’re able to use a slew of functions to calculate the number of days between dates, the month of a date and so on.

We’ll explore these various functions in detail in another blog article.

DECIMAL and FLOAT – Decimal Points

As you may have guessed DECIMAL and FLOAT datatypes are used to work with decimal values such as 10.3.

I lumped DECIMAL and FLOAT into the same category, since they both can handle values with decimal points; however, they both do so differently:

If you need precise values, such as when working with financial or accounting data, then use DECIMAL.  The reason is the DECIMAL datatype allows you to define the number of decimal points to maintain.


DECIMAL data types are defined by precision and scale.  The precision determines the total number of digits stored, whereas the scale determines the number of digits to the right of the decimal point.

A DECIMAL datatype is specified as DECIMAL(precision,scale).

A DECIMAL datatype can be no more than 38 digits.  The precision and scale must adhere to the following relation

0 <= scale <= precision <= 38 digits
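For example, this short sketch (values chosen by me to show rounding and overflow) illustrates how precision and scale behave:

```sql
DECLARE @d DECIMAL(5,2);   -- five digits total, two after the decimal point
SET @d = 123.456;          -- stored as 123.46: rounded to the declared scale
-- SET @d = 1234.5;        -- would fail: 1234.50 needs six digits, only five fit
```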

In the Production.Product table, the weight column’s datatype is defined as DECIMAL(8,2).  The first digit is the precision, the second the scale.

Weight is defined to have eight total digits, two of them to the right of the decimal place.  We’ll use the following sample query to illustrate how this data type works.

SELECT   Weight
FROM     Production.Product
WHERE    Weight BETWEEN 29.00 AND 189.00

The results follow:

[Result grid: using the DECIMAL data type to display results]


Where DECIMAL data types are great for exact numbers, FLOATs are really good for very large numeric values.  Though a DECIMAL value can have 38 digits total, in many engineering and scientific applications this is inadequate.  For scientific applications where extreme numeric values are encountered, FLOAT rises to the top!

FLOATs have a range from -1.79E+308 to 1.79E+308.  That means the largest value can be 179 followed by 306 zeros (large indeed!).

Because of the way float data is stored in the computer (see the IEEE 754 floating point specification), the number stored is an extremely close approximation.  For many applications this is good enough.

Because of this approximate behavior, avoid using the <> and = operators on FLOAT columns in the WHERE clause.  Many a DBA has been burned by a statement like

WHERE mass = 2.5

Their expectations are dashed when mass is supposed to equal 2.5 but, in the computer, it is stored as 2.499999999999999; therefore, it is not equal to 2.500000000000000!

That is the nature of floating points and computers.  You and I see 2.499999999999999 and think for practical purposes it is 2.5, but to the computer, we’re off just a bit.
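A safer pattern is to compare within a tolerance; the table name and threshold below are illustrative:

```sql
-- Instead of:  WHERE mass = 2.5
SELECT *
FROM   Measurements               -- hypothetical table with a FLOAT column
WHERE  ABS(mass - 2.5) < 0.0000001;
```

Pick a tolerance appropriate to the magnitude of the values you store; there is no single threshold that fits every application.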

BIT – Boolean or Yes/No values

There are times when you just need to store whether something “is” or “is not.”  For instance, whether an employee is active.  It is in these cases that the BIT data type comes into its own.  This data type can be in one of three states: 1, 0, or NULL.

The value of 1 signifies TRUE and 0 FALSE.

In this query we’re listing all salaried position job titles

SELECT JobTitle
FROM   HumanResources.Employee
WHERE  SalariedFlag = 1

Here are the results

[Result grid: using the BIT data type in searches]

We could also have used ‘True’ instead of 1.  Here is the same example using ‘True’

SELECT JobTitle
FROM   HumanResources.Employee
WHERE  SalariedFlag = 'True'

And the opposite using ‘False’

SELECT JobTitle
FROM   HumanResources.Employee
WHERE  SalariedFlag = 'False'

I tend to stick with 1 and 0, since it is easier to type, but if you’re going for readability, then ‘True’ and ‘False’ are good options.


Data Modelling Examples

Exercise 1

An art researcher has asked you to design a database to record details of ARTISTS and the MUSEUM in which their PAINTINGS are displayed. For each painting, the researcher wants to know the size of the canvas, year painted, title, and style. The nationality, date of birth and death of each artist must be recorded. For each museum, record details of its location and speciality, if it has one.
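One possible relational sketch for Exercise 1; the table and column names are my own interpretation, not part of the exercise:

```sql
CREATE TABLE Artist (
    ArtistID     INT PRIMARY KEY,
    Name         VARCHAR(100) NOT NULL,
    Nationality  VARCHAR(50),
    BirthDate    DATE,
    DeathDate    DATE                 -- NULL while the artist is alive
);

CREATE TABLE Museum (
    MuseumID     INT PRIMARY KEY,
    Name         VARCHAR(100) NOT NULL,
    Location     VARCHAR(100),
    Speciality   VARCHAR(100)         -- NULL if the museum has none
);

CREATE TABLE Painting (
    PaintingID   INT PRIMARY KEY,
    Title        VARCHAR(200) NOT NULL,
    CanvasSize   VARCHAR(50),
    YearPainted  INT,
    Style        VARCHAR(50),
    ArtistID     INT REFERENCES Artist(ArtistID),
    MuseumID     INT REFERENCES Museum(MuseumID)
);
```

Each painting references exactly one artist and one museum, which matches the one-to-many relationships implied by the brief.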


Exercise 2

The president of a BOOK wholesaler has told you that she wants information about the PUBLISHERS, AUTHORS and books that she carries. She knows that publishers publish many books and that books can be written by one or more authors (obviously, authors can also write more than one book). In order to track accurate information about the publisher, the wholesaler wants to know the publisher’s name, phone number and address. They also want to know who to call when they need to contact the publisher. The wholesaler also wants to know each book’s title, ISBN, publication date, genre, and author. She also wants some basic information about the authors.
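A sketch for Exercise 2; the many-to-many relationship between books and authors calls for a junction table. All names here are my own interpretation of the brief:

```sql
CREATE TABLE Publisher (
    PublisherID  INT PRIMARY KEY,
    Name         VARCHAR(100) NOT NULL,
    Phone        VARCHAR(20),
    Address      VARCHAR(200),
    ContactName  VARCHAR(100)         -- who to call at the publisher
);

CREATE TABLE Author (
    AuthorID     INT PRIMARY KEY,
    Name         VARCHAR(100) NOT NULL
);

CREATE TABLE Book (
    ISBN         VARCHAR(13) PRIMARY KEY,
    Title        VARCHAR(200) NOT NULL,
    PublishDate  DATE,
    Genre        VARCHAR(50),
    PublisherID  INT REFERENCES Publisher(PublisherID)
);

-- Junction table: a book can have many authors and vice versa.
CREATE TABLE BookAuthor (
    ISBN         VARCHAR(13) REFERENCES Book(ISBN),
    AuthorID     INT REFERENCES Author(AuthorID),
    PRIMARY KEY (ISBN, AuthorID)
);
```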



Digital natives and digital immigrants


‘Digital natives’ are generally born after the 1980s, and they are comfortable in the digital age because they grew up using technology.

‘Digital immigrants’ are those born before the 1980s, and they are often fearful about using technology. Digital immigrants are the older crew; they weren’t raised in a digital environment.

The term digital immigrant mostly applies to individuals who were born before the spread of the digital technology and who were not exposed to it at an early age.

Digital natives are the opposite of digital immigrants, they have been interacting with technology from childhood. According to Prensky, digital natives are the generation of young people who are “native speakers” of the digital language of computers, video games and the Internet.

Millennials were born between the 1980s and 2000s. Those who were born after 2000 are considered Generation Z. In the recent years researchers observed two generations: those born after the 1980s, and those born after 1993, and the results were that the younger group had more positive attitudes toward the Internet and lower anxiety scores about the Internet and higher web, e-mail and social media usage.

Studies say that digital natives’ brains are more actively engaged while scrolling through a webpage than while reading printed text.

New technologies have been a defining feature in the lives of younger generations in a way that they predict a fundamental change in the way young people communicate, socialize, create and learn. The Internet has reshaped the way we search for information and the way we think.

Digital natives see everyone on an equal level and do not divide the world into hierarchies; they view the world horizontally. They cross boundaries and embrace the benefits of sharing with each other. Those values exist because of what they are driven by. We can learn a lot about digital native generations because their world is one of genuine democracy and equality. They reject centralized and control-based forms of governance. They are a more aggressive, competitive and result-obsessed generation, and their advantage is their productivity. The difference between digital natives and digital immigrants is that digital immigrants are goal oriented while digital natives are value oriented. Digital natives like to parallel process and multi-task.

Because of interacting with technology, digital natives “think and process information fundamentally differently” (Prensky) to digital immigrants. Digital natives, according to Prensky, process information quickly and enjoy multi-tasking and gaming, while digital immigrants process information slowly, work on one thing at a time, and do not appreciate less serious approaches to learning. This divide, Prensky argued, is the greatest problem facing education today, and teachers must change the way they teach in order to engage their students. Children raised with the computer think differently: they develop hypertext minds. Education needs to change in order to meet the new generation’s expectations. Prensky claims the digital native is becoming the dominant global demographic, while the digital immigrant is in decline.

The thing is that digital natives check their social platforms first, not TV. They would rather be engaged than marketed to; they do not care whether content is professionally produced, only that it is authentic and on their level. They develop their own culture — an IT culture.

Learning from the natives

From the natives, the immigrants can learn to be more open and willing to engage with learners of differing backgrounds. They can learn from the natives how to sift through and focus resources, which are aplenty and are not as overwhelming for the native. They can learn to scale the learning and create what is possible.

Learning from the immigrants

Digital immigrants can teach natives to achieve goals quickly. They can help the “techno-wizards” scale the learning and create what is possible. They can look at the existing institutions and re-purpose them or rethink their vitality. A Native may be able to offer great ideas for layouts, image, design and labeling, while the immigrant would contribute their knowledge to storytelling and the value of including worthy artifacts.

Digital immigrants’ groups:

Avoiders: they prefer a relatively minimal-technology, or technology-free, lifestyle. They do not have an email account and/or smartphone and tend to have landlines. Social media is too much for them, and they do not see the value in these activities.

Reluctant adopters: they accept technology and try to engage with it, but find it unintuitive and hard to use. They have a cell phone but do not use texting; they occasionally use Google and do not have a Facebook account, but they check their email and use online banking.

Enthusiastic adopters: digital immigrants who have the potential to keep up with natives. They embrace technology and may be high-tech executives, programmers, and business people. This group sees the value of technology; they use Facebook, check email regularly, and get excited about technology. If they run a business, they have a website.

Digital natives’ groups:

Avoiders: even though they were born into the digital world, some young people do not feel an affinity for digital technologies or Facebook. They are not enamored of mobile technologies. They have cell phones but do not use email or social media accounts.

Minimalists: they use technology minimally and only when they perceive it necessary. They search for information on Google if they have to and purchase online only if they cannot buy something at a local store. They check their Facebook account once a day or every couple of days.

Enthusiastic participants: they make up most of the digital natives. They enjoy technology and gadgets. They use Facebook all day long, have other social media accounts, and watch YouTube and movies online as much as possible. The first thing they do when they want to know something is turn to Google. This group is easier to reach via social media than cell phones. They thrive on instant communication and own a smartphone for constant access to the Web.

So how can people from these two groups work together? How can digital immigrants teach digital natives and vice versa?

Some digital immigrants surpass digital natives in tech-savviness, but there is a belief that early exposure to technology fundamentally changes the way people learn. The adoption of digital technology has not been a uniform phenomenon worldwide. There are many opportunities for the two groups to learn from each other, with the generations feeding each other knowledge. Collaboration is the key, because digital immigrants are the ones who invented the technologies and systems that digital natives today use fluently. It is important, then, to have a variety of people with a variety of abilities and experiences. Teachers must develop lessons around horizontal solutions. Embracing all technology leads to a broader understanding of the problem. As digital natives are driven by productivity, their working style may seem competitive, so incorporating more value into the process may be a good strategy.

Using SSIS to Generate Surrogate Keys

By Richard Hiers, 2017

In the following post, I’ll explain how to use SSIS, Microsoft’s ETL tool, to generate surrogate keys for dimension tables in a data warehouse. Although most dimensional data possesses a natural key or a business key in the source tables, Kimball and everyone else I’ve read strongly recommend using a surrogate key for dimension tables. Our prospects, for example, all have a unique ProspectID, but it is recommended that the ETL system generate a unique identifying key for each record. This surrogate key then becomes the foreign key in the fact tables.

You can read more about the advantages of using a surrogate key in Kimball’s The Data Warehouse Toolkit, 3rd Ed. starting on page 98, Corr’s Agile Data Warehouse Design starting on page 141, and Adamson’s Star Schema on page 30 and following. But in brief, some of the key benefits of using a surrogate key are:

  1. Isolates the data warehouse from any instability in the source OLTP system keys where business keys may possibly be deleted, recycled, and modified in ways that could be detrimental to the functioning of the data warehouse.
  2. Facilitates pulling dimensional attributes from disparate source systems in which each has its own independent business keys.
  3. Supports slowly changing dimensions (SCD) where there can be multiple versions of any given entity in the dimension.
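To make benefit 3 concrete, here is a minimal Python sketch with made-up prospect data (the names `dim_prospect` and `add_version` are my own illustration, not part of any SSIS package): the same business key can appear in several versions of a dimension row, each carrying its own surrogate key.

```python
# Minimal sketch with hypothetical data: the same business key (ProspectID)
# appears in two versions of the dimension row, each with its own surrogate key.
dim_prospect = []

def add_version(prospect_id, city):
    """Append a new version of a prospect row, assigning the next surrogate key."""
    surrogate_key = len(dim_prospect) + 1
    dim_prospect.append({"SK": surrogate_key, "ProspectID": prospect_id, "City": city})
    return surrogate_key

sk_before = add_version(1001, "Austin")  # original version of prospect 1001
sk_after = add_version(1001, "Dallas")   # prospect moved: new row, new SK
# Fact rows reference the SK in force at the time, not the ProspectID,
# so history is preserved even though the business key repeats.
print(sk_before, sk_after)  # 1 2
```

If the fact table joined on ProspectID instead, the two versions would be indistinguishable; joining on the surrogate key keeps each fact row tied to the version that was current when it was loaded.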

So the question becomes, how do you create and maintain these surrogate keys?

The Need: An SSIS Surrogate Key Generator

It would be possible to design the dimension table with an auto-incrementing identity key, but at least for performance reasons, that doesn’t seem to be the preferred mechanism. From Kimball starting on page 469: “For improved efficiency, consider having the ETL tool generate and maintain the surrogate keys. Avoid the temptation of concatenating the operational key of the source system and a date/time stamp.”  OK, so how do I accomplish that?

The Challenge: An SSIS Surrogate Key Generator

Given this and similar recommendations found on the web, and given the maturity of Microsoft’s SSIS ETL tools, I would expect this to be pretty straightforward. I would expect that I could just drag a Surrogate Key Creator transformation into my Data Flow and go on my merry way. But it’s not that straightforward. The tool doesn’t exist, and even in Knight’s Professional Microsoft SQL Server 2014 Integration Services, the topic is not addressed that I can find, though I haven’t yet read it from stem to stern.

Given the complexity of some of the solutions I found on the web, it is really surprising to me that Microsoft hasn’t made this simpler to accomplish. But that seems to be where we are. With the help of some instructions on the web, I was eventually able to craft an ETL surrogate key generator independent of my data warehouse RDBMS. In the examples below I am creating a generator for my Expected Entrance Term dimension table.

Using a Script Component

I was able to create a working surrogate key (SK) generator by first referring to the concepts and script found in a 2011 post from Joost van Rossum’s SSIS blog. I’ll briefly outline the solution, since I found his instructions to be incomplete (he seems to have assumed a higher level of familiarity with the various components and thus omitted some details). After that, I’ll outline a simpler method that does not require a script.

  1. Start by querying the destination dimension table for the last SK used. On the Control Flow tab in Microsoft Visual Studio, drag an Execute SQL Task to the canvas. Double-click to edit.
  2. Set ResultSet to “Single row”
  3. Define your OLE DB Connection
  4. Insert your SQL Statement (modify as needed), something along these lines:

SELECT ISNULL(MAX(TERM_KEY), 0) AS Counter  -- highest existing SK, or 0 if the table is empty
FROM SemesterTerms

SSIS sql task control flow

  5. On the Result Set tab, click the Add button to add a variable. Leave the Variable Name as “User::Counter” and rename the Result Name to whatever you prefer.

result set variable

  6. Click OK to save your changes
  7. Add a Data Flow Task, connect the Execute SQL Task to it, and double-click to edit
  8. In the Data Flow, add an OLE DB Source to the canvas and configure it to select the records you need to populate your dimension (from the Term table in my case)
  9. Connect that to a Script Component and click to edit it (choose “Transformation” for the type when you drop it on the canvas)
  10. Go to the Inputs and Outputs tab, click Add Column, and give it a Name and DataType (I used TERMKEY and four-byte signed integer [DT_I4]). Note: if you name it something other than “RowId”, you will need to modify the code below.
  11. Make sure the ScriptLanguage property is set to C# and click the Edit Script… button to open the script editor
  12. Replace the placeholder code with the C# code from Rossum’s blog post (step 7) referenced above (make sure you select the C# code and not the VB.NET code). NOTE: if you did not name your new column “RowId”, you will need to modify row 59 in the sample code to reflect your column name.
  13. Save the script, close that window, and then click OK to save and close the Script Component editor
  14. Add an OLE DB Destination and configure it to map the source columns plus the new key column to the appropriate destination columns. The data flow will look like this:

surrogate key generator with c# script

  15. Once you have saved this project, click the Start button to take it for a spin. On the first run you should see all of your dimension records added to the data warehouse with sequential keys starting at 1.
  16. Start the project a second time, and you will see that you now have twice the records, but the surrogate keys for the second load start where the previous load ended. In my case the first load ended with an SK of 280 and the second load started with 281.

Nice! Of course, in a production ETL system, the second load would include only new rows (assigned the next higher keys) or changed rows, which would be handled differently based on the type of attribute that changed (are we tracking historical values, or just the current value?).
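The behavior of those two runs can be simulated in a few lines of Python (the function and variable names here are my own illustration, not part of the SSIS package); the essential idea is that the counter is seeded from the destination’s current maximum key:

```python
# Sketch of the key-generation behavior: seed a counter from the highest
# surrogate key already in the dimension, then number the new rows from there.
def assign_surrogate_keys(dimension, new_rows):
    counter = max((row["SK"] for row in dimension), default=0)
    for row in new_rows:
        counter += 1
        dimension.append({"SK": counter, **row})

dim = []
assign_surrogate_keys(dim, [{"TermID": t} for t in range(1, 281)])  # first load
print(dim[-1]["SK"])   # 280
assign_surrogate_keys(dim, [{"TermID": 281}])                       # next load
print(dim[-1]["SK"])   # 281
```

This is exactly why the second SSIS run continued at 281: the Execute SQL Task re-reads the MAX key before the Data Flow starts assigning new ones.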

A Simpler Surrogate Key Generator

While attempting to fill in the gaps of the instructions I found for the script method above, I came across a simpler method on Sam Vanga’s blog that doesn’t require a script (scroll down to his “With ROW_NUMBER()” method). It uses the same Execute SQL Task, but consumes the result in the initial OLE DB Source transformation rather than in a Script Component. Here are the steps I took:

First, perform steps 1-7 from the previous method.

  8. In the Data Flow, add an OLE DB Source to the canvas and configure it to select the records you need to populate your dimension (from the Term table in my case). However, in addition to the query you used in the previous method, you will add another column to the SELECT to make use of the variable returned from the Execute SQL Task:

SELECT ROW_NUMBER() OVER (ORDER BY TermCalendarID) + ? AS TERM_KEY  -- order column shown as an example
    ,TermCalendarID AS TERM_ID
FROM SemesterTerms

  9. As in the query above, add the expression “ROW_NUMBER() OVER(ORDER BY <column_name>) + ? AS TERM_KEY”. Without the “+ ?”, this query orders the rows by the column indicated and then assigns sequential numbers, starting with 1, to each row. In my case, I had 280 rows in the result, numbered sequentially from 1-280.
  10. The “?” (question mark) is the parameter placeholder in the OLE DB Source for SSIS. After adding the question mark, click the Parameters… button to the right of the query window. Note: if you don’t have a “?” in your query, you will get the message: “The SQL statement does not contain any parameters.”
  11. Select the variable you defined in step 5 (User::Counter in my case) and click OK to save.

SSIS OLE DB source editor parameters

What this does, then (“ROW_NUMBER() OVER(ORDER BY <column_name>) + ? AS TERM_KEY”), is give each row a sequential row number and then add to each the variable value, which is the MAX surrogate key of the destination dimension table. On the initial load, the TERM dimension table is empty, so “0” is added to each row number; the destination ends up with 280 rows, with TERM_KEYs numbered sequentially 1-280. On the next load, each new row number will have 280 added to it. So if 5 new terms had been added to the source table, the first would be ROW_NUMBER 1 plus 280, resulting in a new surrogate key of 281, and so on.
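The arithmetic is easy to verify outside SSIS. This hypothetical Python snippet (my own sketch, using made-up term codes) mirrors ROW_NUMBER() over the ordered rows plus the parameter value:

```python
# Mirror of ROW_NUMBER() OVER (ORDER BY ...) + ?: number the sorted rows
# from 1, then add the destination's current MAX key (0 on the initial load).
def keyed_rows(source_rows, max_existing_key):
    ordered = sorted(source_rows)
    return [(rownum + max_existing_key, value)
            for rownum, value in enumerate(ordered, start=1)]

print(keyed_rows(["2016FA", "2017SP"], max_existing_key=0))
# [(1, '2016FA'), (2, '2017SP')]
print(keyed_rows(["2017FA"], max_existing_key=2))
# [(3, '2017FA')]
```

The second call shows the incremental case: with two rows already keyed, the next load’s first row lands at 3, just as the 281st term did in my package.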

  12. Add an OLE DB Destination and configure it to map the source columns plus the new key column to the appropriate destination columns. Click OK to save.

This simpler surrogate key generator populates the destination dimension just as the previous one did. I haven’t compared the performance of the two methods, but Sam Vanga claims that the ROW_NUMBER() method is twice as fast as the script option (though he is using a different script than I am).


Kimball specified that the surrogate key generator should “independently generate surrogate keys for every dimension; it should be independent of database instance and able to serve distributed clients… consider having the ETL tool generate and maintain the surrogate keys” (pp. 469-470). Do these two methods meet that specification? They certainly do not rely on database triggers or on an identity column. I’m not sure what he has in mind with “distributed clients” (can anyone provide insight? Please leave a comment below). Are these methods examples of the ETL tool “maintaining” the keys? Kimball seems to be saying the generator is a thing; when I started looking around for an SSIS key generator, I was surprised to find only methods. I’ll continue to look for other options, but for now, this seems to be the best approach. Let me know your thoughts and experience.


Read more:

Useful Power BI Visuals Reference

How do you choose the right visuals for your reports and dashboards among all the available options?
Here is a clean reference that lists nearly all the visuals available today, categorized by their use case. The categories we identified are:

  1. Comparison
  2. Change over time (Trend)
  3. Part-to-whole
  4. Flow
  5. Ranking
  6. Spatial
  7. Distribution
  8. Single

How to Create a Stored Procedure with Dynamic Search & Paging (Pagination)

I used this article as a reference for my very first stored procedure for my internship project, so I would like to share it with you if you are looking for ideas on how to start a stored procedure with search and pagination.

How can we apply paging (or pagination) to the recordsets returned from a stored procedure?

Pagination is required when a stored procedure returns a lot of records, say more than 1,000, and the webpage becomes very heavy to load. It also becomes difficult for a user to handle and view that many records on a single screen.

Many webpages displaying records have First, Previous, Next, and Last buttons at the top and bottom of the data grid. To make those buttons work, we have to implement the paging functionality in the SP itself.
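Before looking at the SQL, note that the paging window is just arithmetic. This small Python sketch (my own illustration, not part of the referenced article) shows which row numbers a given page keeps, using the same FirstRec/LastRec bounds the stored procedure computes:

```python
# A page keeps the rows whose 1-based row number falls strictly between
# FirstRec = (page - 1) * size and LastRec = page * size + 1.
def fetch_page(rows, page_nbr, page_size):
    first_rec = (page_nbr - 1) * page_size
    last_rec = page_nbr * page_size + 1
    return [r for i, r in enumerate(rows, start=1) if first_rec < i < last_rec]

contact_ids = list(range(1, 26))          # 25 pretend ContactIDs
print(fetch_page(contact_ids, 1, 10))     # IDs 1..10
print(fetch_page(contact_ids, 2, 10))     # IDs 11..20
print(fetch_page(contact_ids, 3, 10))     # IDs 21..25 (partial last page)
```

The strict inequalities are why the SP sets LastRec to `page * size + 1`: row numbers greater than FirstRec and less than LastRec form exactly one page of `size` rows.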

Let’s check with some sample code using the [Person].[Contact] table of the [AdventureWorks] database:

CREATE PROCEDURE USP_GET_Contacts_DynSearch_Paging
    -- Pagination
    @PageNbr            INT = 1,
    @PageSize           INT = 10,
    -- Optional Filters for Dynamic Search
    @ContactID          INT = NULL,
    @FirstName          NVARCHAR(50) = NULL,
    @LastName           NVARCHAR(50) = NULL,
    @EmailAddress       NVARCHAR(50) = NULL,
    @EmailPromotion     INT = NULL,
    @Phone              NVARCHAR(25) = NULL
AS
BEGIN
    -- Local copies of the search parameters
    DECLARE
        @lContactID         INT,
        @lFirstName         NVARCHAR(50),
        @lLastName          NVARCHAR(50),
        @lEmailAddress      NVARCHAR(50),
        @lEmailPromotion    INT,
        @lPhone             NVARCHAR(25)

    -- Local variables for the paging window
    DECLARE
        @lPageNbr   INT,
        @lPageSize  INT,
        @lFirstRec  INT,
        @lLastRec   INT,
        @lTotalRows INT

    SET @lContactID         = @ContactID
    SET @lFirstName         = LTRIM(RTRIM(@FirstName))
    SET @lLastName          = LTRIM(RTRIM(@LastName))
    SET @lEmailAddress      = LTRIM(RTRIM(@EmailAddress))
    SET @lEmailPromotion    = @EmailPromotion
    SET @lPhone             = LTRIM(RTRIM(@Phone))

    SET @lPageNbr   = @PageNbr
    SET @lPageSize  = @PageSize
    SET @lFirstRec  = ( @lPageNbr - 1 ) * @lPageSize
    SET @lLastRec   = ( @lPageNbr * @lPageSize + 1 )
    SET @lTotalRows = @lLastRec - @lFirstRec - 1    -- rows per page

    ; WITH CTE_Results
    AS (
        SELECT ROW_NUMBER() OVER (ORDER BY ContactID) AS ROWNUM,
            ContactID, FirstName, LastName,
            EmailAddress, EmailPromotion, Phone
        FROM Person.Contact
        WHERE
            (@lContactID IS NULL OR ContactID = @lContactID)
        AND (@lFirstName IS NULL OR FirstName LIKE '%' + @lFirstName + '%')
        AND (@lLastName IS NULL OR LastName LIKE '%' + @lLastName + '%')
        AND (@lEmailAddress IS NULL OR EmailAddress LIKE '%' + @lEmailAddress + '%')
        AND (@lEmailPromotion IS NULL OR EmailPromotion = @lEmailPromotion)
        AND (@lPhone IS NULL OR Phone = @lPhone)
    )
    SELECT ContactID, FirstName, LastName,
        EmailAddress, EmailPromotion, Phone
    FROM CTE_Results AS CPC
    WHERE ROWNUM > @lFirstRec
      AND ROWNUM < @lLastRec
END
GO

Let’s test this SP:

-- No parameters provided, fetch the first 10 records by default:
EXEC USP_GET_Contacts_DynSearch_Paging
-- Providing @PageSize=20 fetches 20 records:
EXEC USP_GET_Contacts_DynSearch_Paging @PageSize=20
-- Providing @PageNbr=2, @PageSize=10 displays the second page, ContactID 11 to 20:
EXEC USP_GET_Contacts_DynSearch_Paging @PageNbr=2, @PageSize=10
-- Providing @PageNbr=1, @PageSize=50, @FirstName='Sam' searches for FirstName like 'Sam' and fetches the first 50 matching records:
EXEC USP_GET_Contacts_DynSearch_Paging @PageNbr=1, @PageSize=50, @FirstName = 'Sam'