Best Map Visual So Far – Power BI Integrated With ArcGIS

Power BI, integrated with ArcGIS, delivers the world’s leading visual spatial analytics. ArcGIS Maps for Power BI offers enhanced mapping and analysis capabilities, demographic data, and compelling visualizations.

Online Tutorial

Perform spatial analysis

Spatial analysis leads to greater understanding by mapping relevant locations and spotting relationships and patterns, ultimately helping you determine the best actions to take.

With ArcGIS you can: create heat maps with a click, aggregate data into clusters, determine drive time, and reveal hidden patterns in your data.

Give context to data

Use ArcGIS to analyze data based on spatial qualities. Identify a location in relation to other critical variables.

Add demographic and reference layers, such as income, population, or weather, included free with ArcGIS.

Be right more often

ArcGIS ensures your locations are mapped correctly every time. When your maps match your data, you get accurate spatial analysis and confidence in your decisions.

 

Source
https://powerbi.microsoft.com/en-us/power-bi-esri-arcgis/

Power BI: How to Get an Address by Passing Existing Coordinates to the Google Maps Reverse Geocoding API

Create a Function from an Existing Query (Address Lookup)

From the previous tutorial, we loaded the geo-location information for just one coordinate. Now we need to turn the query into a function so that we can invoke it for all the coordinates in our dataset.

A function is constructed as shown below:

  • Click “Advanced Editor”

Power BI Desktop Advanced Editor

  • Change the code as shown in the sample below (“Sample Data single load”)
let
    get_geo = (lat as text, lng as text) =>
    let
        // Call the Google Maps reverse geocoding API for the given coordinates
        // (substitute your own API key in the key parameter)
        Source = Json.Document(Web.Contents("https://maps.googleapis.com/maps/api/geocode/json?latlng=" & lat & "," & lng & "&key=AIzaSyBwKQ7VSYAw4zE0zTtMCQ-k839qoeNdeso")),
        // Flatten the JSON results into a table of formatted_address / types pairs
        results = Source[results],
        #"Converted to Table" = Table.FromList(results, Splitter.SplitByNothing(), null, null, ExtraValues.Error),
        #"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1", {"formatted_address", "types"}, {"Column1.formatted_address", "Column1.types"}),
        #"Expanded Column1.types" = Table.ExpandListColumn(#"Expanded Column1", "Column1.types"),
        // Keep only the street_address and postal_code results
        #"Filtered Rows" = Table.SelectRows(#"Expanded Column1.types", each ([Column1.types] = "postal_code" or [Column1.types] = "street_address")),
        // Pivot the remaining rows into a single row with street_address and postal_code columns
        #"Transposed Table" = Table.Transpose(#"Filtered Rows"),
        #"Reversed Rows" = Table.ReverseRows(#"Transposed Table"),
        #"Promoted Headers" = Table.PromoteHeaders(#"Reversed Rows", [PromoteAllScalars=true]),
        #"Changed Type" = Table.TransformColumnTypes(#"Promoted Headers", {{"street_address", type text}, {"postal_code", type text}})
    in
        #"Changed Type"
in
    get_geo

Power BI Desktop Create Function

1 – Function internal name

2 & 4 – Input parameters

3 & 5 – Parameters’ data types

6 – Function body

7 – Output

Note: The name that will be used to invoke the function is the query name, which is “fn_GeoLocation” in our sample.
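For example, once the query is renamed, any other query can call it with text coordinates. A minimal illustration (the coordinates below are made up, not taken from the sample data):

  fn_GeoLocation("38.7223", "-9.1393")

The call typically returns a one-row table with street_address and postal_code columns, as produced by the steps inside the function.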

  • Continue modifying the code as shown below, then click “Done”

Power BI Desktop Function Parameters

  • What we have done above is replace the static latitude and longitude with parameters, using “&” to concatenate the pieces of the source URL.
  • As soon as you click “Done”, Power BI detects the query as a function

Power BI Desktop Invoke Function

  • To test the function, click “Invoke”, enter a latitude and longitude, and then click “OK”

Power BI Desktop Invoke Function 2

Power BI Desktop Invoke Function 3

  • Delete the invoked function step

We successfully defined a query as a function so that we can invoke it anytime.

The next step is to load GPS data from CSV. I’m using my bike computer data that I exported to CSV format earlier.

Download sample CSV file here.

  • In the Query Editor, click “New Source” on the “Home” tab, then click “CSV”

Power BI CSV

  • Browse and open the CSV file
  • Click OK

Power BI Query Editor CSV

  • Click “Add Custom Column” from the “Add Column” tab and name the new column “Geolocation”
  • Enter a formula that invokes the fn_GeoLocation function (a sketch is shown just below this list) and click “OK”
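As a sketch, the custom column formula would look like the following, assuming the CSV exposes the coordinates in columns named “Latitude” and “Longitude” (adjust these to your file’s actual column names):

  fn_GeoLocation(Number.ToText([Latitude]), Number.ToText([Longitude]))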

Power BI Query Editor Add New Column

Note: I used the “Number.ToText” function to convert the latitude and longitude values from number to text, since we defined the function parameters as text.

  • You’ll get an “Information is required about data privacy” warning message. Click “Continue”

Power BI Query Editor Add New Column 2

  • Select a desired privacy level from the list then click “Save”

Power BI Privacy Levels

Note: Learn more about “Privacy levels” here.

  • Scroll right to find the newly added column
  • Expand “Geolocation” column

Power BI Expand Complex Column 2

Voilà! We have all the information we need.

  • Click “Close & Apply” on the “Home” tab of the ribbon and wait until the data loads into the model.

Power BI Query Editor Close & Apply

Note: It might take some time to load the data into the model, depending on your dataset size. The coordinates are sent to Google Maps row by row and the results are then loaded into the model, so the process can be slow for a large dataset. There are also usage limits applied to the API.

http://biinsight.com/power-bi-and-google-maps-api-address-lookup/

INDEX MATCH in Excel

The INDEX MATCH MATCH formula is the combination of two functions in Excel: INDEX and MATCH.

=INDEX() returns the value of a cell in a table based on the column and row number.

=MATCH() returns the position of a cell in a row or column.

Combined, the two formulas can look up and return the value of a cell in a table based on vertical and horizontal criteria. For short, this is referred to as just the INDEX MATCH function.

#1 How to use the INDEX formula

Below is a table showing people’s names, heights, and weights. We want to use the INDEX formula to look up Kevin’s height; here is an example of how to do it.

Follow these steps:

  1. Type “=INDEX(” and select the area of the table then add a comma
  2. Type the row number for Kevin, which is “4” and add a comma
  3. Type the column number for Height, which is “2” and close the bracket
  4. The result is “5.8”

Index formula Excel
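As a concrete sketch, assume the table, including its header row, sits in cells A1:C5, with Name, Height, and Weight as the columns and Kevin in the fourth row of that range (these cell references are illustrative, not taken from the screenshot):

  =INDEX(A1:C5, 4, 2)

This returns 5.8, Kevin’s height.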

 

#2 How to use the MATCH formula

Sticking with the same example as above, let’s use MATCH to figure out what row Kevin is in.

Follow these steps:

  1. Type “=MATCH(” and link to the cell containing “Kevin”… the name we want to look up
  2. Select all the cells in the Name column (including the “Name” header)
  3. Type zero “0” for an exact match
  4. The result is that Kevin is in row “4”

Match formula Excel
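Under the same assumed layout (names in A1:A5, including the “Name” header), and typing the lookup value directly instead of linking to a cell, the formula would be:

  =MATCH("Kevin", A1:A5, 0)

This returns 4.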

Use MATCH again to figure out what column Height is in.

Follow these steps:

  1. Type “=MATCH(” and link to the cell containing “Height”… the criteria we want to look up
  2. Select all the cells across the top row of the table
  3. Type zero “0” for an exact match
  4. The result is that Height is in column “2”

Match function
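Likewise, with the header row assumed to be in A1:C1:

  =MATCH("Height", A1:C1, 0)

This returns 2.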

 

#3 How to combine INDEX and MATCH

Now we can take the two MATCH formulas and use them to replace the “4” and the “2” in the original INDEX formula. The result is an INDEX MATCH MATCH formula.

Follow these steps:

  1. Cut the MATCH formula for Kevin and replace the “4” with it
  2. Cut the MATCH formula for Height and replace the “2” with it
  3. The result is that Kevin’s height is “5.8”
  4. Congratulations, you now have a dynamic INDEX MATCH formula!

Index Match Match in Excel
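Putting it all together under the same assumed layout, the combined formula would be:

  =INDEX(A1:C5, MATCH("Kevin", A1:A5, 0), MATCH("Height", A1:C1, 0))

This returns 5.8 and keeps working if rows or columns are moved, because both positions are looked up dynamically rather than hard-coded.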

 

https://corporatefinanceinstitute.com/resources/excel/study/index-match-formula-excel/

Part 2: How to Set Up a Virtual Machine

Setting Up a Virtual Machine

Once you’ve decided on a VM app and gotten it installed, setting up a VM is actually pretty easy. We’re going to run through the basic process in VirtualBox, but most apps handle creating a VM the same way.

Open up your VM app and click the button to create a new virtual machine.

You’ll be guided through the process by a wizard that first asks which OS you’ll be installing. If you type the name of the OS in the “Name” box, the app will most likely automatically select the type and version for the OS. If it doesn’t—or it guesses wrong—select those items yourself from the dropdown menus. When you’re done, click “Next.”

Based on the OS you plan to install, the wizard will preselect some default settings for you, but you can change them over the screens that follow. You’ll be asked how much memory to allocate to the VM. If you want something other than the default, select it here. Otherwise, just click “Next.” And don’t worry, you’ll be able to change this value later if you need to.

The wizard will also create the virtual hard disk file to be used by the VM. Unless you already have a virtual hard disk file you want to use, just select the option to create a new one.

You’ll also be asked whether to create a dynamically allocated or fixed size disk. With a dynamically allocated disk, you’ll set a maximum disk size, but the file will only grow to that size as it needs to. With a fixed size disk, you’ll also set a size, but the file created will be that large from its creation.

We recommend creating fixed size disks because, while they eat up a little more disk space, they also perform better—making your VM feel a bit more responsive. Plus, you’ll know how much disk space you’ve used and won’t get surprised when your VM files start growing.

You’ll then be able to set the size of the virtual disk. You’re free to go with the default setting or change the size to suit your needs. Once you click “Create,” the virtual hard disk is created.

After that, you’re dumped back into the main VM app window, where your new VM should show up. Make sure the installation media you need is available to the machine—usually this involves pointing to an ISO file or real disc through the VM’s settings. You can run your new VM by selecting it and hitting “Start.”
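If you prefer the command line, the same basic steps can be scripted with VirtualBox’s VBoxManage tool. The commands below are only a rough sketch; the VM name, OS type, disk size, and file names are illustrative and not taken from this guide:

  # Create and register the VM
  VBoxManage createvm --name "TestVM" --ostype "Windows10_64" --register

  # Allocate memory and CPUs (can be changed later)
  VBoxManage modifyvm "TestVM" --memory 4096 --cpus 2

  # Create a fixed-size virtual hard disk (size in MB)
  VBoxManage createmedium disk --filename "TestVM.vdi" --size 51200 --variant Fixed

  # Add a SATA controller, then attach the disk and the installer ISO
  VBoxManage storagectl "TestVM" --name "SATA" --add sata
  VBoxManage storageattach "TestVM" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "TestVM.vdi"
  VBoxManage storageattach "TestVM" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium "installer.iso"

  # Start the VM and boot from the installer
  VBoxManage startvm "TestVM"

Whether you use the GUI wizard or the CLI, the result is the same virtual machine definition.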


Of course, we’ve just touched on the basics of using VMs here. If you’re interested in more reading, check out some of our other guides:

https://www.howtogeek.com/196060/beginner-geek-how-to-create-and-use-virtual-machines/

Part 1: How to Create and Use Virtual Machines

Virtual machines allow you to run an operating system in an app window on your desktop that behaves like a full, separate computer. You can use them to play around with different operating systems, run software your main operating system can’t, and try out apps in a safe, sandboxed environment.

There are several good free virtual machine (VM) apps out there, which makes setting up a virtual machine something anybody can do. You’ll need to install a VM app, and have access to installation media for the operating system you want to install.

What’s a Virtual Machine?

A virtual machine app creates a virtualized environment—called, simply enough, a virtual machine—that behaves like a separate computer system, complete with virtual hardware devices. The VM runs as a process in a window on your current operating system. You can boot an operating system installer disc (or live CD) inside the virtual machine, and the operating system will be “tricked” into thinking it’s running on a real computer. It will install and run just as it would on a real, physical machine. Whenever you want to use the operating system, you can open the virtual machine program and use it in a window on your current desktop.

In the VM world, the operating system actually running on your computer is called the host and any operating systems running inside VMs are called guests. It helps keep things from getting too confusing.

In a particular VM, the guest OS is stored on a virtual hard drive—a big, multi-gigabyte file stored on your real hard drive. The VM app presents this file to the guest OS as a real hard drive. This means you won’t have to mess around with partitioning or doing anything else complicated with your real hard drive.

Virtualization does add some overhead, so don’t expect virtual machines to be as fast as an operating system installed on real hardware. Demanding games or other apps that require serious graphics and CPU power don’t really do so well, so virtual machines aren’t the ideal way to play Windows PC games on Linux or Mac OS X—at least, not unless those games are much older or aren’t graphically demanding.

How many VMs you can have is really just limited by the amount of hard drive space available. Here’s a peek at some of the VMs we use when testing things out while writing articles. As you can see, we’ve got full VMs with several versions of Windows and Ubuntu installed.

You can also run multiple VMs at the same time, but you’ll find yourself somewhat limited by your system resources. Each VM eats up some CPU time, RAM, and other resources.

Why You’d Want to Create a Virtual Machine

Aside from being good geeky fun to play around with, VMs offer a number of serious uses. They allow you to experiment with another OS without having to install it on your physical hardware. For example, they are a great way to mess around with Linux—or a new Linux distribution—and see if it feels right for you. When you’re done playing with an OS, you can just delete the VM.

VMs also provide a way to run another OS’ software. For example, as a Linux or Mac user, you could install Windows in a VM to run Windows apps you might not otherwise have access to. If you want to run a later version of Windows—like Windows 10—but have older apps that only run on XP, you could install Windows XP into a VM.

Another advantage VMs provide is that they are “sandboxed” from the rest of your system. Software inside a VM can’t escape the VM to tamper with the rest of your system. This makes VMs a safe place to test apps—or websites—you don’t trust and see what they do.

For example, when the “Hi, we’re from Windows” scammers came calling, we ran their software in a VM to see what they would actually do—the VM prevented the scammers from accessing our computer’s real operating system and files.

Sandboxing also allows you to run insecure OSes more safely. If you still need Windows XP for older apps, you could run it in a VM where at least the harm of running an old, unsupported OS is mitigated.

Virtual Machine Apps

There are several different virtual machine programs you can choose from:

  • VirtualBox: (Windows, Linux, Mac OS X): VirtualBox is very popular because it’s open-source and completely free. There’s no paid version of VirtualBox, so you don’t have to deal with the usual “upgrade to get more features” upsells and nags. VirtualBox works very well, particularly on Windows and Linux where there’s less competition, making it a good place to start with VMs.
  • VMware Player: (Windows, Linux): VMware has their own line of virtual machine programs. You can use VMware Player on Windows or Linux as a free, basic virtual machine tool. More advanced features—many of which are found in VirtualBox for free—require upgrading to the paid VMware Workstation program. We recommend starting out with VirtualBox, but if it doesn’t work properly you may want to try VMware Player.
  • VMware Fusion: (Mac OS X): Mac users must buy VMware Fusion to use a VMware product, since the free VMware Player isn’t available on a Mac. However, VMware Fusion is more polished.
  • Parallels Desktop: (Mac OS X): Macs also have Parallels Desktop available. Both Parallels Desktop and VMware Fusion for Mac are more polished than the virtual machine programs on other platforms, since they’re marketed to average Mac users who might want to run Windows software.

While VirtualBox works very well on Windows and Linux, Mac users may want to buy a more polished, integrated Parallels Desktop or VMware Fusion program. Windows and Linux tools like VirtualBox and VMware Player tend to be targeted to a geekier audience.

There are many more VM options, of course. Linux includes KVM, an integrated virtualization solution. Professional and Enterprise versions of Windows 8 and 10—but not Windows 7—include Microsoft’s Hyper-V, another integrated virtual machine solution. These solutions can work well, but they don’t have the most user-friendly interfaces.

https://www.howtogeek.com/196060/beginner-geek-how-to-create-and-use-virtual-machines/

Terminology: High-Level and Low-Level

High-level and low-level, as technical terms, are used to classify, describe, and point to specific goals of a systematic operation. They are applied in a wide range of contexts, in domains as varied as computer science and business administration.

High-level describes operations that are more abstract in nature, where the overall goals and systemic features are typically concerned with the wider, macro system as a whole.

Low-level describes more specific individual components of a systematic operation, focusing on the details of rudimentary micro functions rather than macro, complex processes. Low-level classification is typically more concerned with individual components within the system and how they operate.

In computer science, software is typically divided into two types: high-level end-user applications software (such as word processors, databases, video games, etc.) and low-level systems software (such as operating systems, hardware drivers, etc.).
As such, high-level applications typically rely on low-level software to function.
In terms of programming, a high-level programming language is one that has a relatively high level of abstraction and manipulates conceptual functions in a structured manner.
A low-level programming language is one, like assembly language, that contains rudimentary microprocessor commands.

 

https://en.wikipedia.org/wiki/High-_and_low-level

Useful Power BI Visuals Reference

How do you choose the right visuals for your reports and dashboards among all these resources?
Here is a clean reference that lists nearly all the visuals available today, categorized by their use case. The categories we identified are:

  1. Comparison
  2. Change over time (Trend)
  3. Part-to-whole
  4. Flow
  5. Ranking
  6. Spatial
  7. Distribution
  8. Single

SSAS 01: SQL Server Analysis Services Tutorial Scenario

 


 

This tutorial is based on Adventure Works Cycles, a fictitious company. Adventure Works Cycles is a large, multinational manufacturing company that produces and distributes metal and composite bicycles to commercial markets in North America, Europe, and Asia. The headquarters for Adventure Works Cycles is Bothell, Washington, where the company employs 500 workers. Additionally, Adventure Works Cycles employs several regional sales teams throughout its market base.

In recent years, Adventure Works Cycles bought a small manufacturing plant, Importadores Neptuno, which is located in Mexico. Importadores Neptuno manufactures several critical subcomponents for the Adventure Works Cycles product line. These subcomponents are shipped to the Bothell location for final product assembly. In 2005, Importadores Neptuno became the sole manufacturer and distributor of the touring bicycle product group.

Following a successful fiscal year, Adventure Works Cycles now wants to broaden its market share by targeting advertising to its best customers, extending product availability through an external Web site, and reducing the cost of sales by reducing production costs.

Current Analysis Environment

To support the data analysis needs of the sales and marketing teams and of senior management, the company currently takes transactional data from the AdventureWorks2012 database, and non-transactional information such as sales quotas from spreadsheets, and consolidates this information into the AdventureWorksDW2012 relational data warehouse. However, the relational data warehouse presents the following challenges:

  • Reports are static. Users have no way to interactively explore the data in the reports to obtain more detailed information, such as they could do with a Microsoft Office Excel pivot table. Although the existing set of predefined reports is sufficient for many users, more advanced users need direct query access to the database for interactive queries and specialized reports. However, because of the complexity of the AdventureWorksDW2012 database, too much time is needed for such users to master how to create effective queries.
  • Query performance is widely variable. For example, some queries return results very quickly, in only a few seconds, while other queries take several minutes to return.
  • Aggregate tables are difficult to manage. In an attempt to improve query response times, the data warehouse team at Adventure Works built several aggregate tables in the AdventureWorksDW2012 database. For example, they built a table that summarizes sales by month. However, while these aggregate tables greatly improve query performance, the infrastructure that they built to maintain the tables over time is fragile and prone to errors.
  • Complex calculation logic is buried in report definitions and is difficult to share between reports. Because this business logic is generated separately for each report, summary information sometimes is different between reports. Therefore, management has limited confidence in the data warehouse reports.
  • Users in different business units are interested in different views of the data. Each group is distracted and confused by data elements that are irrelevant to them.
  • Calculation logic is particularly challenging for users who need specialized reports. Because such users must define the calculation logic separately for each report, there is no centralized control over how the calculation logic is defined. For example, some users know that they should use basic statistical techniques such as moving averages, but they do not know how to construct such calculations and so do not use these techniques.
  • It is difficult to combine related sets of information. Specialized queries that combine two sets of related information, such as sales and sales quotas, are difficult for business users to construct. Such queries overwhelmed the database, so the company requires that users request cross-subject-area sets of data from the data warehouse team. As a result, only a handful of predefined reports have been defined that combine data from multiple subject areas. Additionally, users are reluctant to try to modify these reports because of their complexity.
  • Reports are focused primarily on business information in the United States. Users in the non-U.S. subsidiaries are very dissatisfied with this focus, and want to be able to view reports in different currencies and different languages.
  • Information is difficult to audit. The Finance department currently uses the AdventureWorksDW2012 database only as a source of data from which to query in bulk. They then download the data into individual spreadsheets, and spend significant time preparing the data and manipulating the spreadsheets. The corporate financial reports are therefore difficult to prepare, audit, and manage across the company.

The Solution

The data warehouse team recently performed a design review of the current analysis system. The review included a gap analysis of current issues and future demands. The data warehouse team determined that the AdventureWorksDW2012 database is a well-designed dimensional database with conformed dimensions and surrogate keys. Conformed dimensions enable a dimension to be used in multiple data marts, such as a time dimension or a product dimension. Surrogate keys are artificial keys that link dimension and fact tables and that are used to ensure uniqueness and to improve performance. Moreover, the data warehouse team determined that there currently are no significant problems with the loading and management of the base tables in the AdventureWorksDW2012 database. The team has therefore decided to use Microsoft Analysis Services to accomplish the following:

  • Provide unified data access through a common metadata layer for analysis and reporting.
  • Simplify users’ view of data, speeding the development of both interactive and predefined queries and predefined reports.
  • Correctly construct queries that combine data from multiple subject areas.
  • Manage aggregates.
  • Store and reuse complex calculations.
  • Present a localized experience to business users outside the United States.

The lessons in the Analysis Services tutorial provide guidance in building a cube database that meets all of these goals. To get started, continue to the first lesson: SSAS 02: Create a New Tabular Model Project.


Analysis Services is an analytical data engine used in decision support and business analytics. It provides enterprise-grade semantic data models for business reports and client applications such as Power BI, Excel, Reporting Services reports, and other data visualization tools.

A typical workflow includes creating a tabular or multidimensional data model project in Visual Studio, deploying the model as a database to a server instance, setting up recurring data processing, and assigning permissions to allow data access by end-users. When it’s ready to go, your semantic data model can be accessed by client applications supporting Analysis Services as a data source.

Analysis Services is available in two different platforms:

  1. Azure Analysis Services – Supports tabular models at the 1200 and higher compatibility levels. DirectQuery, partitions, row-level security, bi-directional relationships, and translations are all supported. To learn more, see Azure Analysis Services.
  2. SQL Server Analysis Services – Supports tabular models at all compatibility levels, multidimensional models, data mining, and Power Pivot for SharePoint.

This series of articles is mostly about SQL Server Analysis Services. However, at least for tabular models, how you create and deploy your tabular model projects is much the same, regardless of the platform you’re using.

In general, Azure Analysis Services documentation is included with Azure documentation. If you’re interested in having your tabular models in the cloud, it’s best to start there.