1. Prepare the Data (20-25%)
1.1. Get data from different sources
1.1.1. identify and connect to a data source
1.1.2. change data source settings
1.1.3. select a shared dataset or create a local dataset
1.1.4. select a storage mode
1.1.4.1. Import
1.1.4.1.1. Import mode creates a local Power BI copy of the data from your source. All Power BI service features work with this storage mode, including Q&A and Quick Insights. However, the data must be refreshed (manually or on a schedule) to reflect changes in the source. Import is the default storage mode for new Power BI reports.
1.1.4.2. DirectQuery
1.1.4.2.1. DirectQuery is useful when you do not want to store local copies of your data, because nothing is cached. Instead, Power BI sends native queries to the underlying data source and retrieves only the data each visual requires, effectively creating a direct connection to the source. This ensures that you are always viewing the most up-to-date data and that the source's security requirements are enforced. DirectQuery is also well suited to large datasets: rather than slowing performance by loading large volumes of data into Power BI, you query the source directly, which also mitigates data latency issues.
1.1.4.3. Dual
1.1.4.3.1. In Dual mode, some data can be imported while other data is queried directly. A table brought into your report in Dual mode behaves as a product of both Import and DirectQuery: it can serve results from the in-memory cache or pass the query through to the source, which lets Power BI choose the most efficient form of data retrieval for each request.
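The three storage modes above apply to the same source connection; the mode is a table property chosen in Power BI Desktop's Model view, not something expressed in the query itself. As a minimal sketch (the server, database, and table names are hypothetical placeholders), a Power Query M query like the following could back a table in any of the three modes:

```m
// Connect to a SQL Server source; "SalesServer" and "SalesDB" are
// placeholder names for this sketch.
let
    Source = Sql.Database("SalesServer", "SalesDB"),
    // Navigate to the dbo.FactSales table. Under Import this result is
    // cached in the model; under DirectQuery it is folded back to the
    // source at query time; under Dual, either path may be used.
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data]
in
    FactSales
```

Whether Power BI caches this table or folds each report query back to the source is then controlled by the table's Storage mode property, not by the M code.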
1.1.5. choose an appropriate query type
1.1.6. identify query performance issues
1.1.7. use the Common Data Service (CDS)
1.1.8. use parameters
1.2. Profile the data
1.2.1. identify data anomalies
1.2.2. examine data structures
1.2.3. interrogate column properties
1.2.4. interrogate data statistics
1.3. Clean, transform and load the data
1.3.1. resolve inconsistencies, unexpected or null values, and data quality issues
1.3.2. apply user-friendly value replacements
1.3.3. identify and create appropriate keys for joins
1.3.4. evaluate and transform column data types
1.3.5. apply data shape transformations to table structures
1.3.6. combine queries
1.3.7. apply user-friendly naming conventions to columns and queries
1.3.8. leverage Advanced Editor to modify Power Query M code
1.3.9. configure data loading
1.3.10. resolve data import errors
2. Model the Data (25-30%)
2.1. Design a data model
2.1.1. define the tables
2.1.2. configure table and column properties
2.1.3. define quick measures
2.1.4. flatten out a parent-child hierarchy
2.1.5. define role-playing dimensions
2.1.6. define a relationship's cardinality and cross-filter direction
2.1.7. design the data model to meet performance requirements
2.1.8. resolve many-to-many relationships
2.1.9. create a common date table
2.1.10. define the appropriate level of granularity
2.2. Develop a data model
2.2.1. apply cross-filter direction and security filtering
2.2.2. create calculated tables
2.2.3. create hierarchies
2.2.4. create calculated columns
2.2.5. implement row-level security roles
2.2.6. set up the Q&A feature
2.3. Create measures by using DAX
2.3.1. use DAX to build complex measures
2.3.2. use CALCULATE to manipulate filters
2.3.3. implement Time Intelligence using DAX
2.3.4. replace numeric columns with measures
2.3.5. use basic statistical functions to enhance data
2.3.6. create semi-additive measures
2.4. Optimize model performance
2.4.1. remove unnecessary rows and columns
2.4.2. identify poorly performing measures, relationships and visuals
2.4.3. improve cardinality levels by changing data types
2.4.4. improve cardinality levels through summarization
2.4.5. create and manage aggregations
3. Visualize the Data (25-30%)
3.1. Create reports
3.1.1. add visualization items to reports
3.1.2. choose an appropriate visualization type
3.1.3. format and configure visualizations
3.1.4. import a custom visual
3.1.5. configure conditional formatting
3.1.6. apply slicing and filtering
3.1.7. add an R or Python visual
3.1.8. configure the report page
3.1.9. design and configure for accessibility
3.1.10. configure automatic page refresh
3.2. Create dashboards
3.2.1. set mobile view
3.2.2. manage tiles on a dashboard
3.2.3. configure data alerts
3.2.4. use the Q&A feature
3.2.5. add a dashboard theme
3.2.6. pin a live report page to a dashboard
3.2.7. configure data classification
3.3. Enrich reports for usability
3.3.1. configure bookmarks
3.3.2. create custom tooltips
3.3.3. edit and configure interactions between visuals
3.3.4. configure navigation for a report
3.3.5. apply sorting
3.3.6. configure Sync Slicers
3.3.7. use the selection pane
3.3.8. use drillthrough and cross filter
3.3.9. drill down into data using interactive visuals
3.3.10. export report data
3.3.11. design reports for mobile devices
4. Analyze the Data (10-15%)
4.1. Enhance reports to expose insights
4.1.1. apply conditional formatting
4.1.2. apply slicers and filters
4.1.3. perform top N analysis
4.1.4. explore statistical summary
4.1.5. use the Q&A visual
4.1.6. add a Quick Insights result to a report
4.1.7. create reference lines by using the Analytics pane
4.1.8. use the Play Axis feature of a visualization
4.2. Perform advanced analysis
4.2.1. identify outliers
4.2.2. conduct Time Series analysis
4.2.3. use groupings and binnings
4.2.4. use the Key Influencers visual to explore dimensional variances
4.2.5. use the decomposition tree visual to break down a measure
4.2.6. apply AI Insights
5. Deploy and Maintain Deliverables (10-15%)
5.1. Manage datasets
5.1.1. configure a dataset scheduled refresh
5.1.2. configure row-level security group membership
5.1.3. provide access to datasets
5.1.4. configure incremental refresh settings
5.1.5. promote or certify a dataset
5.2. Create and manage workspaces
5.2.1. create and configure a workspace
5.2.2. recommend a development lifecycle strategy
5.2.3. assign workspace roles
5.2.4. configure and update a workspace app
5.2.5. publish, import or update assets in a workspace
5.2.6. apply sensitivity labels to workspace content