Blog

Navigator Menu

Upgrade to Navigator App – Custom Menu Based on Shared Apps for User

Prior Post

I recently posted about a Power App I created that gave users an easy way to launch other apps they needed without having to “know” the app launch details. See https://ilovepowerapps.wordpress.com/2022/07/27/build-a-navigator-app/.

Some apps had limited sharing, though, and users were getting frustrated when they clicked a navigation button only to find out they didn’t have shared permissions for that app. I decided to modify the Navigator app so that the only choices shown were those the logged-on user had shared permission to access.

First Stab at a Solution

The first step in an improved solution was to create a SharePoint list with a record for each button (corresponding to an app). It has the following columns:

  • Title – Text description describing the button
  • AppID – Text containing the app id (from app details page)
  • ButtonLabel – Text value for button display
  • ButtonID – Integer from 1 to number of buttons
  • UserValid – Text field with default value of “No”

I then created a flow that the app called. This flow took two input parameters – user display name and app ID – and returned a text value of “Yes” if the user had shared access to the app and “No” if not. The key action in this flow is

Get App Role Assignments as Admin

The action returns a list of users who have shared permissions for the app ID, so in the flow I checked each shared user for a match against the user display name passed in. The flow was run for each row in the list described above, patching the collection’s UserValid column with the returned value.
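As a sketch of how the app might call such a flow per row: the flow name CheckAppAccess, the collection name colButtons, and the output property result are all hypothetical names for illustration.

```
// For each button row, ask the flow whether the current user has access,
// then patch the matching record in the collection (hypothetical names)
ForAll('Navigator Buttons' As btn,
    Patch(colButtons,
        LookUp(colButtons, ButtonID = btn.ButtonID),
        {UserValid: CheckAppAccess.Run(User().FullName, btn.AppID).result}
    )
)
```

Iterating the SharePoint list while patching a separate collection avoids modifying the table that ForAll is iterating.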

I then created a gallery with two items: a label showing the Title field and a button with its OnSelect property set to the action described in my original post and its Text property set to the ButtonLabel value. I filtered the gallery using UserValid = “Yes”.

This worked great: only the shared apps appeared. Then other users tried it, and it failed to work. Get App Role Assignments as Admin works only if the executing user is a site administrator, and just a few users are.

A Different Approach

Because different users might be using the Navigator at the same time, I had to create a new SharePoint list with a record for each user and a column for each app. I didn’t want to maintain this list manually, so I created a flow that combines building the list of users and getting shared app permissions. First the new list:

  • Title – Text field containing user display name
  • <app> – each app has a text column with default value of “No”

New Flow Scheduled to Run Daily

The first thing this new flow does is check whether the new list above – let’s call it User Apps – is empty. If it is not empty, it deletes all the records in the list to have a clean starting point.

The next action is

Search for users (V2)

This returns a list of users, but the results include containers other than actual users. Fortunately, all real users have a value in Department, so in the Apply to Each action on the returned values, a condition to ignore entries with no Department was easy to add. Your situation might require a different filtering mechanism.

For each user, the flow creates a new item in the User Apps list. Recall that the default value for each app column is “No”.

The flow then gets the SharePoint list of Navigator buttons and for each item does the following actions:

  1. Run Get App Role Assignments as Admin using the AppID and the user display name from the User Apps list item.
  2. Get Items from User Apps where the display name from the Get App Role Assignments as Admin action equals the Title in that list. Only one item should be returned, but it still requires Apply to Each.
  3. A Switch action based on ButtonID. Each case then updates the User Apps item, setting the app column to the “Yes” value returned from the Get App Role Assignments as Admin action.

The end result is that the User Apps list has an item for each user, with a column for each app in the Navigator whose value is “Yes” if the user has shared access or “No” if not.

In the Navigator app, the OnStart commands are

Set(varUser, User().FullName);
ClearCollect(colNavigate,
    ForAll('Navigator Fields',
        {
            Title: Title,
            ButtonID: ButtonID,
            'Button Text': 'Button Text',
            'App ID': 'App ID',
            UserValid: "No"
        }
    )
);
ClearCollect(colUserApp, Filter('User App Shares', Title = varUser));
ForAll(colUserApp,
    Patch(colNavigate, LookUp(colNavigate, Title = "<appName>"), {UserValid: <app valid>})
    ..... <repeat Patch() for each app>
);

Then set the gallery’s Items property to Filter(colNavigate, UserValid = "Yes") and just the permitted apps appear.

The Flow Actions

Flow to build User App List

I Hope This Delights You

Please leave any comments or questions and feel free to repost links to this blog entry. Thanks for reading!

PowerApps, Timer control

Build a Navigator App

What is a Navigator App?

I built a number of Power Apps that users often did not recall. Since they were also designed to run in a particular order, it was equally common for users to skip one or more steps.

I created tabs in Teams that not only specified the apps but also showed the order. That left something to be desired:

  1. There were too many tabs, so they didn’t all show at once.
  2. There was not much to remind users of the run order.

My Navigator Solution

Great, I thought, I will just create a Power App that lets users run apps by clicking buttons and shows the order in which they should be run. Sounded pretty straightforward.

But then a few issues developed.

How do I get the app to run another app but return to the Navigator app when exiting the referenced app? The solution turned out to be a series of things.

Steps to solve the issues

The place to start is the OnSelect property of each button that runs another app. The Power Apps function to use is Launch(). I used Launch() with three parameters:

  • The link to the target app;
  • The parameter to use;
  • The destination tab/window control.

The link parameter can take one of two forms: the https link for the app or a specific app reference. Both can be found on the details page of the app, as shown in this sample.

I used the app ID and it was something like this:

Launch("/providers/Microsoft.PowerApps/apps/<appID>"

Of course, substitute the actual app ID and make sure the exact string is all in double quotes.

The next argument in the expression is for parameters. I needed just a single parameter that I could test in the app to determine whether it had been launched by the Navigator app or accessed directly by the user. I used the same parameter for all apps, like this:

{q: "Nav"}

Later I will explain why I pass a parameter and how to use it in the target apps.

The third and last value I set to LaunchTarget.Replace. This tells the Navigator to use the same window to run the target app. The full expression is

Launch("/providers/Microsoft.PowerApps/apps/20528f0f-c9fc-40e5-861a-<more…>", {q: "Pick"}, LaunchTarget.Replace)

I then added an Exit(false) expression.

Of Course This Didn’t Work Properly

I say of course, but rest assured I thought it would. I wasn’t able to return to the Navigator app and see it reappear in the same browser tab.

After some testing, I came up with the following solution.

On a blank new screen, I added a timer control. I set Duration to 3000 (3 seconds), OnTimerEnd to Exit(false), and Start to varTimer. Then I modified the OnSelect to be

Launch("/providers/Microsoft.PowerApps/apps/20528f0f-c9fc-40e5-861a-<more…>", {q: "Pick"}, LaunchTarget.Replace); Set(varTimer, true)

Modifying the Target Apps

Each target app required two changes:

  • In the App -> OnStart, insert Set(varParam, Param("q"))
  • In the app Exit or Quit button, insert

// If started from launcher, return to it; else just exit
If(IsBlank(varParam),
    Exit(),
    Launch("https://apps.powerapps.com/play/d16bb196-d5c5-4093-8bc8-9767408a4637?tenantId=b6904651<remainder…>", {q: 1}, LaunchTarget.Replace))

In other words, if no parameter was passed, the user ran the app directly; otherwise it was invoked by the Navigator. I used the URL form in this example, but it could also have been the app ID format.

Works Great!

Now it all works as intended. Click a button for a target app, the Navigator goes away, and the target app appears in the same browser tab. Exit the app and the Navigator reappears in the same tab. Pretty fast as well.

Except that….

Maybe you recall Gilda Radner as Roseanne Roseannadanna on SNL. As she said, “It’s always something.” Imagine my surprise to hear from users that clicking on any of the app launch buttons did nothing at all. I tried debugging, but it always seemed to work just fine. Then I found that the users having this problem were running the Navigator app inside Teams, since I had added that tab. And it’s true that nothing happens when it runs in Teams.

That seems to make sense.

Lists larger than 2000, Microsoft Flow, sharepoint lists

Power App Collections Larger than 2000

When a data source has more than 2000 records

Power Apps does not allow retrieving more than 2000 records from a data source using ClearCollect(). But that does not mean that a collection cannot contain more than 2000 items; you just need a few steps, and clever tricks, to get a larger collection.

Depending on your data source and the columns available to you, there are different ways of getting items in groups of 2000, but the objective is to have some column that can be used to group the items into definable 2000-item chunks. Moreover, if your data source is SharePoint, there are other limitations.

The default number of rows to retrieve is 500. Click on Settings for the app and increase this count to 2000. I always do this as a first step when creating an app.

A Wrong Approach for SharePoint

In my environment, I have identical data in both SharePoint and SQL. I have flows that trigger when new items are added to SQL tables and create the corresponding new items in SharePoint lists. The primary reason for this approach is that premium licenses are not required for the majority of Power Apps users if only SharePoint lists are used, but they would be required if the SQL tables were used directly.

The way I planned on getting collections in groups of 2000 was to filter by ID. I was able to have lists where ID values started at 1 and increased by 1. Easy, I thought, so I could do the following:

ClearCollect(col1, Filter(SPList, ID < 2001));
ClearCollect(col2, Filter(SPList, ID > 2000 && ID < 4001));
ClearCollect(col3, Filter(SPList, ID > 4000));
ClearCollect(colBig, col1, col2, col3)

The number of records in my data source was around 5500, so three collections would suffice. If the count is fairly stable, you can use fixed collections, but if it varies a lot, this could be done dynamically, which I will leave for a later post.

Imagine my surprise, then, when the above actions executed: col1 had 2000 records, but col2 and col3 were empty! What the heck?

Finding and Fixing the Issue

I created a test Power App that used the SQL table instead of the SharePoint list. The same actions worked just fine for the SQL table. That pointed to something about SharePoint. With a bit of digging, and luck, I found that filtering on ID doesn’t work for SharePoint lists. Easy, I thought, as I added a computed column set to =ID.

You might not be surprised that it doesn’t work for computed columns, either. I removed that column and added a number column. But I had to get the value of each ID into that column. Power Automate to the rescue, using a manual flow.

The steps are easy:

  1. Get items from the SharePoint list. Note: be sure to change the settings on this step to use a row count larger than the number of rows. Click on the … on the step and choose Settings.
  2. Apply to each on the results of the above step.
  3. Update item, setting the new number column to ID.

This can take a bit of time, about 15 minutes for my list. As an advisory note, the run page reported that the task timed out, but returning to Run History showed it still running, so let that be your guide.

Finally

I changed my steps to the following:

Concurrent(
    ClearCollect(col1, Filter(SPList, IDNum < 2001)),
    ClearCollect(col2, Filter(SPList, IDNum > 2000 && IDNum < 4001)),
    ClearCollect(col3, Filter(SPList, IDNum > 4000))
);
ClearCollect(colBig, col1, col2, col3)

Concurrent() allows the ClearCollect() calls for the chunks to run concurrently, which speeds up execution. Note that the formulas inside Concurrent() are separated by commas, not semicolons. If you omit Concurrent(), everything still works.

My editorial comment is that Microsoft should allow getting more than 2000 records from SharePoint. If they would require a premium license to use a larger value, I would be okay with that. In the meantime, I hope this helps you.

SQL, SQL connector

Recovering flows and apps after accidentally deleting SQL connector

Imagine my surprise…

I made a huge mistake. I originally had a Power App that used SQL, but I later changed it to use SharePoint lists instead, because the license costs for accessing SQL were more than the organization wanted to spend. I set up flows that trigger when a new record is added to certain SQL tables and copy the record to a SharePoint list.

Even though no SQL access was used in the Power App, the connection was still there. One day, several of the users could no longer run the app. Easy peasy, I thought, I will just delete the connection.

Not only did that remove it from that app, but from ALL the apps using SQL. As an added bonus, it also removed it from all the flows.

This should be easy to fix, I thought

Not really. Try as I might, I could not get the SQL connector to connect again, and that included adding a new SQL connector. I kept getting authentication errors, and more. I finally submitted a support request. The quick response back was that they had no idea what to do but would be in touch. I also knew that since this happened on a Friday, the expectation was that I would have it fixed by Monday.

And so I did. Here’s how.

SQL is an Azure instance

I went to the Azure portal and drilled down to the database instance, a level below the server. On the left-hand navigation, Power BI, Power Apps, and Power Automate are selectable. I chose Power Apps.

On the next page, there is a “Get started” button under the section to create a Power App automatically. I clicked on that, chose a random table from the database, and a few moments later I had my app.

What I also had was a new connector to the SQL database.

Fixing Power Automate flows

This turned out to be very straightforward. Here’s what I did:

  1. Edited the flow.
  2. Each step that used the failing connection showed an error.
  3. I clicked on that step to open it.
  4. I was prompted to choose a connection and chose the new one created above.

As if by magic, all of the other values and parameters for each step were still in place. After correcting all the error steps, I saved the flows and found they ran perfectly.

Fixing Power Apps

I figured out a great way to do this that minimizes the changes you have to make.

  1. Edit the app.
  2. Click on the data icon on the left.
  3. You will see any broken SQL table connections, shown as not connected.
  4. Make a note, carefully, of those tables.
  5. One by one, delete those connectors.
  6. Now click on add data, choose SQL, and then the new SQL connector instance from above.
  7. Select all of the tables you noted in step 4. That’s why you have to make careful notes.
  8. When those tables are added, your Power App should be restored with no additional effort.
  9. Save and publish it.

If You Don’t Follow These Steps

You may be tempted not to delete the broken connections first. If you skip that, the new ones will come in as <tablename>_1 and you will have to edit each and every instance of the table name throughout your app. Careful notes are far easier.

I hope this saves you some angst in case this happens to you.

AddColumns, Filtering, List, PowerApps, SharePoint, SortByColumns, Uncategorized

Using SortByColumns with AddColumn and multiple sort columns

First, Workaround for People Column

What I needed to do was create a collection of items from a SharePoint list to display in a gallery. I wanted the items sorted by the display name value of a person field in the list and then by a sequence number, so that all items for the same person would be grouped together and appear in sequence-number order.

You cannot, however, use SortByColumns on a person field. Instead, you have to use AddColumns to create a sortable column from data in the person field.

The second thing to put together is sorting on multiple columns, in my case the new column I created with AddColumns and another field in my SharePoint list. The catch is that you can construct this incorrectly without producing any errors when composing the function or when it runs. You simply don’t get the correct results. It’s an easy fix that I will illustrate.

How to AddColumns from a People column

The function to use will look like this:

AddColumns(<SharePointlist>, "<NewColumnName>", <peoplepicker>.DisplayName)

It’s important to note several things:

  • Make sure the <NewColumnName> is enclosed in double quotes "
  • If there are spaces in the <peoplepicker> name, type a single quote ' to begin the column name and Power Apps will find the correct name
  • DisplayName gets the display name for the person, but other values such as Email or Department can be chosen as well
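For instance, with a hypothetical list named Requests and a person column named 'Assigned To', the call might look like:

```
// Adds a text column "AssignedName" holding each person's display name
AddColumns(Requests, "AssignedName", 'Assigned To'.DisplayName)
```

The new AssignedName column exists only in the formula result, not in the SharePoint list itself.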

You May Also Need to Filter the Selection

Just place the AddColumns expression inside the Filter expression like this:

Filter(AddColumns(<SharePointList>, "<NewColumnName>", '<peoplepicker>'.DisplayName),
    <col1> = <value1> && (<col2> || <value2>) && !IsBlank(<NewColumnName>)
)

Certainly your details will vary. But the example does illustrate that you can use the new column created by AddColumns in the Filter conditions along with any other valid filter conditions.

Now Let’s Sort the Resulting Items

That means wrapping the expression with a SortByColumns function:

SortByColumns(Filter(AddColumns(<SharePointList>, "<NewColumnName>", '<peoplepicker>'.DisplayName),
    <col1> = <value1> && (<col2> || <value2>) && !IsBlank(<NewColumnName>)),
    "<NewColumnName>", "<anothersortcolumn>")

In words, here’s what this means:

  1. Note that the sort column names are enclosed in double quotes ";
  2. Add a new column from the existing people picker column’s display name value. Note that this does not affect your SharePoint list (or any other data source); it applies only to the gathering of these items.
  3. Select only items from the source that match the criteria in the Filter expression.
  4. Sort the items by the display name values and, within each unique value, by the additional column.
  5. You can add more sort columns if you wish.

Now Put It All in a Collection

Wrap it all in a ClearCollect:

ClearCollect(<colname>,
    SortByColumns(Filter(AddColumns(<SharePointList>, "<NewColumnName>", '<peoplepicker>'.DisplayName),
        <col1> = <value1> && (<col2> || <value2>) && !IsBlank(<NewColumnName>)),
        "<NewColumnName>", "<anothersortcolumn>"))

This readily works if you display the collection in a gallery and include the columns in the SortByColumns expression. What leapt out at me was that the items in my collection were indeed sorted by the first value, but the ordering by the second value was completely random, or so it seemed.

Ugh. Everything looked fine. But then I switched the sort columns, and the results now sorted correctly on the new first (formerly second) value while the second (formerly first) value seemed random.

The Fix

I took a look at the documentation for the SortByColumns function and saw that it allows "<sortcolumn>" [, Ascending | Descending]. This optional parameter sorts in ascending or descending order. Because I wanted ascending, and that is the default, I just omitted it.

BUT, using "<NewColumnName>", "<anothersortcolumn>" leads the evaluator to assume the value after the comma separating them is the optional order parameter. With no match, it produces neither an error in the expression nor a runtime error, just wrong results.

So the fix was so simple:

"<NewColumnName>", Ascending, "<anothersortcolumn>", Ascending

And magically it all works correctly.
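Putting it all together with hypothetical names (a list Requests, a person column 'Assigned To', and a number column SequenceNumber), the full corrected expression might look like:

```
ClearCollect(colSorted,
    SortByColumns(
        Filter(
            AddColumns(Requests, "AssignedName", 'Assigned To'.DisplayName),
            !IsBlank(AssignedName)
        ),
        "AssignedName", Ascending,      // primary sort: person display name
        "SequenceNumber", Ascending     // secondary sort within each person
    )
)
```

Spelling out Ascending for every sort column removes the ambiguity described above.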

Thanks for reading my posts. I hope I have saved you time and helped you think of clever solutions to your Power Platform problems.

Power Automate, SQL, Trigger

Power Automate Triggers for SQL Tables

When an Item Is Created/Modified

I know this applies to an Azure instance of SQL Server; it may apply to on-premises SQL databases as well.

If you build an automated cloud flow and want to use a SQL table, you might be surprised, as I was, that the table in the database does not appear as a choice in the SQL trigger. Yes, you could do as I did – use a custom value for the table – which “seems” to work: all of the dynamic values for the table are available in subsequent action steps.

Once you have the flow built and an item is added to or changed in the table, your flow will:

  • fail with a trigger error
  • show no run history to help you debug

The Problem Is Your SQL Table Columns

My first flow where this came up was for when a new item was added. The simple fix to get the trigger to be recognized was to execute the following query

ALTER TABLE dbo.<tablename>
   ADD <ID> INT IDENTITY(1,1) PRIMARY KEY

Substitute your table name for <tablename> and your column name for <ID>, although I did use ID.

This creates a primary key when the table does not have one. In order for Power Automate to “see” the table correctly, it must have a primary key column of type INT. A SQL identity column is a column whose values are automatically generated when you add a new row to the table. IDENTITY(1,1) tells SQL to start the value of the column at 1 and increment it by 1 for each new row.
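As a quick illustration with a hypothetical table, the identity column fills itself in on each insert:

```
-- Hypothetical table: ID is generated automatically
CREATE TABLE dbo.Widgets (
    ID INT IDENTITY(1,1) PRIMARY KEY,
    WidgetName NVARCHAR(50)
);

INSERT INTO dbo.Widgets (WidgetName) VALUES ('Gear');     -- ID becomes 1
INSERT INTO dbo.Widgets (WidgetName) VALUES ('Sprocket'); -- ID becomes 2
```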

If the table already has a primary key, just omit PRIMARY KEY in the above query.

Success!

Now when I create an automated cloud flow for that SQL database, for a new item added, the table appears as a choice. To “fix” the flow I had already created, I simply edited the trigger and chose that table. The flow then ran when a new item was added to the table.

This gave me a lot of confidence to use flows on the table. Boy was I too optimistic.

When an Item is Modified….

Imagine my surprise when I tried to create a new flow on the same table but when an item is modified. The table did NOT appear as a choice in the trigger, nor did it work if I chose the table as a custom value.

Microsoft support came to the rescue. It took them some time, which I hope to spare you from having to endure.

Simply execute this query:

ALTER TABLE dbo.<tablename>
ADD <Vercolumn> ROWVERSION

Again, substitute your names for <tablename> and <Vercolumn>; <tablename> is the same table as before. Note that a SQL Server table can have only one IDENTITY column, so this version column uses the ROWVERSION type, which SQL also maintains automatically on every change.

After this query runs, the table now appears in the trigger.

emailing strings, Power Automate

Power Automate Getting Line Breaks into a String for Email Body

The Problem

I have a data source that has several street address lines in separate columns: address 1, address 2, address 3, … address 5, city, state, and zip. In my flow, these are available from the trigger item “when a new item is created.” What I wanted to appear in the email the flow sends out is a “compressed” address. Because some of the fields might be empty, I wanted to avoid having it appear like:

Name Line
Address 1
Address 2
<blank Address 3>
<blank Address 4>
<blank Address 5>
City, ST 12345

Creating the Compressed Address in My Flow

The first action is to initialize a variable.

Variable used to get line break

In Power Apps, I would just use Char(13) in a concatenation expression to join two strings. I fiddled with using ASCII value 13 here, but it was completely ignored in the body of the email. Instead, I am using the HTML line break, <br>.

The next action is to initialize a STRING variable.

Create address variable

The next action, which is repeated for each of address 2, 3, 4, and 5, is

Conditional adding address line

What the expression in the action above accomplishes is:

  1. The string variable starts with Address 1.
  2. If Address 2 is not blank, the string value has <br> and Address 2 appended to it.
  3. This is repeated for Addresses 3, 4, and 5.
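As a sketch, the value used in each Append to string variable action might look like the following; ShipAddress_Addr2 is an assumed trigger field name here, chosen to match the city/state/zip fields shown later:

```
if(empty(triggerOutputs()?['body/ShipAddress_Addr2']),
   '',
   concat(variables('varChar13'), triggerOutputs()?['body/ShipAddress_Addr2']))
```

This appends nothing when the address line is blank, and the line break plus the address line when it is not.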

Last Steps

The variable string will now contain all of the non-blank address lines separated with

“<br>”

So add another Append to String action with this function

concat(variables('varChar13'), triggerOutputs()?['body/ShipAddress_City'], ', ', triggerOutputs()?['body/ShipAddress_State'], ' ', triggerOutputs()?['body/ShipAddress_PostalCode'])

Now in the body of your email, just insert the variable. It will show as one address for each non-blank line.

By the way, if you need to tweak the city/state/zip fields because they are blank or need some adjustment, simply use a variable to get the field and adjust it.

Happy emailing!

Power Automate, PowerApps, SQL, SQL connector, SQL query

Results of SQL Stored Procedure into Power Apps

This is a follow-up to my previous post on stored procedures in Power Automate.

The good news was that I was able to get a rather complex query, saved as a stored procedure in my SQL database, working instead of using far more complex logic in Power Automate. But like the dog catching the car, I wasn’t quite sure what to do with the query output.

Respond to Power Apps

That action certainly seemed like the right approach, but the query results are an array, and arrays cannot be passed back to Power Apps. So I used a JOIN to create a string, with an arbitrary value serving as a delimiter; I used “/end” but anything would work.

The result of the JOIN was a text string that had for each row of the query:

  • The name of the column in double quotes
  • A colon
  • The value of the column
  • And finally the “/end” to terminate the row

That sequence repeated for each row of the query result. It was a text value I could return to Power Apps.

Then What?

I played with Split and a few other functions, but building a workable composite function continued to elude me. So I’ll skip all of that and get to the very easy part.

The Action is Simply Response

I will warn you ahead of time that it is a premium action, but if you are accessing SQL that is already covered. Until it isn’t.

Anyway, it takes a few steps to get Response to work. It initially will look like this:

I have clicked on Show Advanced Options to make the JSON portion visible.

You now need to do two things:

  1. Get a value for the Body field.
  2. Get the JSON code to create from a sample.

The Body value is easy. Click on the field, choose dynamic values, and from the action that executes the stored procedure in your flow, choose ResultSets Table1.

In order to get the sample code, you need to execute the flow. The easiest way is to do a manual test. What you will want is part of the output from the Execute stored procedure step.

Look at the flow run results and expand the stored procedure. It will look something like this:

What you will need is the information in the body pane. You want to copy all of the contents from the { following “Table1”: until the closing ] near the end of the contents. Highlight it and then press CTRL+C to copy.

Now go back to editing your flow. In the advanced settings of the Response action, click on Generate from sample. In the popup that appears, press CTRL+V to paste the contents you copied from the test run and then click to close the popup. The JSON code will now appear in your action step. Save the flow.
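The pasted sample will be shaped roughly like this; the column names here are hypothetical, and yours will come from your stored procedure’s SELECT:

```
[
    {
        "SalesOrderNumber": "123456",
        "ItemNumber": "ABC123"
    },
    {
        "SalesOrderNumber": "123457",
        "ItemNumber": "XYZ789"
    }
]
```

Generate from sample turns this into the JSON schema the Response action needs.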

Back to Your Power Apps

You want to connect your Power App to your flow. There are a couple of ways to do this, but the one I prefer is to enable the connection in Power Apps settings under upcoming features.

Or use Power Automate on the Actions menu.

In either case, you want to initiate the flow from some action in your Power App, like a button or icon. For the OnSelect value, do the following:

ClearCollect(colname, FlowName.Run(optional_parameters))

The collection will contain column headings based on the SELECT in the stored procedure and column values from each row of the result. I put them in a gallery to do other processing, but you get the idea: extremely simple.

Arrays, Filter Items, Power Automate, PowerApps, SQL, SQL connector, SQL query

Power Automate and SQL Stored Procedures

Why Would You Want to Know about this?

Maybe the answer is that you should know about everything, but realistically we tend to focus on knowing what we need. Well, I found myself trying to solve a problem where relying on Power Automate alone was proving more complex, harder to create the logic I needed, and seemingly impossible to debug. Here is an overview of the problem.

An Inventory System Based on QuickBooks

Typical use of QuickBooks involves creating sales orders for customers and fulfilling them from inventory items. When inventory items are out of stock and a sales order cannot be fulfilled, a company will create purchase orders to replenish inventory. When items are received on a purchase order, the flow I needed to create had to look at all open sales orders, then check each line item on an open sales order to see if the received item matched a line item on the sales order. In other words, the QuickBooks users want to know which sales orders awaiting items that match a received purchase order can now be processed.

Can Power Automate Interact with Desktop QuickBooks?

In a word, no (QuickBooks Online, yes, but they are not the same product). But indirectly, yes. To accomplish this, I use a utility called LetSync https://www.letsyncsolution.com/. What this utility does is a bi-directional sync between QuickBooks and a SQL database. When data is changed in QuickBooks, those changes are echoed in the SQL tables. It is easy to track only certain changes; in this case, I need the purchase order, sales order, and inventory item tables.

To make things easier, when I first set this up, I chose Azure SQL to store the database. That has proven to be a good choice, especially for this solution. Executing a stored procedure from Power Automate is supported for Azure SQL databases.

What Is a Stored Procedure?

Simply put, a stored procedure is a collection of SQL commands stored with the database that can be executed over and over without having to worry about which commands are used. Stored procedures can be passed input parameters and can pass back output parameters. I am not going to do a deep dive into stored procedures because there is a lot of good documentation, including examples of how to create them.

I will point out that you can easily create a stored procedure with a New Query window in Microsoft SQL Server Management Studio. Once you execute that query and see that it completed successfully, be sure to do a refresh on the server in order for SQL Server Management Studio to “see” it. It will be located in

database name -> Programmability -> Stored Procedures, following the System Stored Procedures.

FYI if you right-click on one of the stored procedures, you can choose to modify or execute it.
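A minimal skeleton for such a query window, with hypothetical procedure, table, and parameter names, looks like:

```
-- Hypothetical minimal stored procedure with one input parameter
CREATE PROCEDURE dbo.MyProcedure
    @SomeParam NVARCHAR(100)
AS
BEGIN
    SELECT * FROM dbo.SomeTable WHERE SomeColumn = @SomeParam;
END
```

After executing this once, refresh the server node and the procedure appears under Stored Procedures.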

To the Details

The complexity of what I needed to do is summed up in the following points:

  • A purchase order record changes, indicating an item has been received.
  • From the line item on the purchase order, I can get the inventory item number.
  • Using that number, I need to look first at all sales orders that have a line item matching that inventory item.
  • From the sales order line item, I need to find the sales order header row to get the sales order number and check related information (IsManuallyClosed, e.g.).

What I want to end up with is an array – table if you wish – that in part would look like this:

Sales Order Number | Item Number | Quantity | Amount | Other Information
123456 | ABC123 | 45 | 5 | 02/22/2022

A Power Automate Flow to Accomplish This

It was quite easy to create an automated cloud flow that triggers when a purchase order record changes. From the header record, it is a simple step to get the related line items with a Get Rows V2 and a filter of TxnID eq PARENT_IDKEY (those are column names in the two purchase order tables). Then use Apply to each on the resulting rows to find the item(s) received.

{Note: There is not a column in the table that identifies this condition. Instead, it has to be inferred. I accomplish this by first checking the Received Quantity column for a non-zero value, then checking the custom field column to see if that quantity has already been recorded. If not, then my flow “knows” this item was received.}

I can tell you that attempts to use conditions in Power Automate to take all of the conditions into account turned into a massive and horrible mess. Not only was it very complex, but it never worked as I wanted, and it proved impossible to debug.

SQL Query to the Rescue

I admit that I am not a SQL expert, but I have enough experience to feel comfortable searching for suggestions and playing with SQL queries. Within an hour or so, and after some minor tweaking, I had a working prototype of a query that would give me what I needed.

This single query also executes in far less than one second on thousands of records. Nice bonus.

Moreover, saving the query as a stored procedure allowed me to invoke it from my Power Automate flow once I determined which line item(s) had been received (I passed the stored procedure the inventory item number as a parameter) and to get the results back as an array that I could convert to HTML, for example, or use in other ways.

The Query Solution

I am going to use the test query that validated the overall approach. It does not have a parameter, but I’ll show how to add one later.

SELECT salesorderlineret.ItemRef_FullName,
salesorder.TxnNumber,
salesorderlineret.PARENT_IDKEY
FROM salesorder
INNER JOIN salesorderlineret ON salesorder.TxnID = salesorderlineret.PARENT_IDKEY
WHERE ItemRef_FullName IS NOT NULL AND Invoiced = 0
ORDER BY TxnNumber

This might look intimidating, but let’s break it down.

  • The three terms after the SELECT specify the table and columns (table.column) that I want to use in the query.
  • FROM specifies the source table
  • INNER JOIN combines the rows of the two tables, yielding a result row wherever the join condition matches
  • WHERE is a condition filter for selecting records; omit blank lines and any line items that have invoiced amounts not equal 0
  • ORDER BY specifies how to sort the results, in this case by sales order number

When the query is executed, the results appear (only first few rows showing):

Results of query

Modifying the Query

You can modify this query in a variety of ways. The most obvious is to substitute the tables and columns from your own SQL database. As I mentioned earlier, when you save the query as a stored procedure, you can specify a parameter to be passed to it. Use something like this:

CREATE PROCEDURE dbo.procedurename @parametername varchar(30) AS

followed by the SELECT statement. You would then change the WHERE clause to

WHERE @parametername = ItemRef_FullName

so that the item number is matched to the passed value.
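Putting those pieces together, a parameterized version of the earlier query might look like this (the procedure name and sample item number are my own placeholders; the tables and columns come from the query above):

```sql
CREATE PROCEDURE dbo.GetOpenOrderLinesForItem
    @parametername varchar(30)
AS
BEGIN
    SELECT salesorderlineret.ItemRef_FullName,
           salesorder.TxnNumber,
           salesorderlineret.PARENT_IDKEY
    FROM salesorder
    INNER JOIN salesorderlineret
        ON salesorder.TxnID = salesorderlineret.PARENT_IDKEY
    WHERE @parametername = ItemRef_FullName
      AND Invoiced = 0
    ORDER BY TxnNumber;
END;
GO

-- Example call (item number is hypothetical):
EXEC dbo.GetOpenOrderLinesForItem @parametername = 'ABC123';
```

This is the procedure the flow’s Execute stored procedure action can then invoke, with the inventory item number as the parameter value.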

Summary

Unless you are working with a LetSync-created SQL database like I was, the table and column names and conditions will be different. But when you need to find matching items between two or more tables, this should be a convenient guide for creating your own solution.

Uncategorized

Running a Desktop Flow from a Cloud Flow – RPA, Execute locally on demand

The Scenario

My need was straightforward, but I never imagined the solution would be as well. Judge for yourself.

I have a group of Power Apps and Power Automate flows that can add rows to tables in a SQL database. Subsequently I need to execute a program that detects these new rows and updates QuickBooks with them. I used Windows Task Scheduler to run these tasks (same program but different parameters) hourly, but the business need was to get the new records into QuickBooks as soon as the records were created.

I sweated bullets about different ways to do this, but none seemed plausible or practical. For example, I thought that inserting rows into an Excel spreadsheet or a SharePoint list might be an approach, but then what could I do except launch another cloud flow when a new item was created?

Power Automate for Desktop

When I started playing around with this, it was not at all clear to me how to get it to work. All my experience in cloud Power Automate did not carry over at all. But a bit of play soon had me on track.

There are a lot of posts about downloading and installing Power Automate for Desktop, and it is very straightforward to do. Just be sure to log in with your M365 credentials. I’ll tell you why that is critical later in this post.

The first thing to do is to create a flow, and here is where you first notice the differences between desktop and cloud: in desktop, the flow shows as a single line in the Power Automate window and indicates that it is being edited. I was stumped. What was being edited?

Just double click on the flow and after a few moments another window opens which allows you to select actions you want the flow to process. Unlike the cloud, the desktop flow has its actions on a left hand icon panel. Categories are displayed, and by clicking on them, they expand to show you details.

On the right-hand panel are input and output variables. There is a create icon for input variables, while output variables are shown when the flow actions you select generate them. To add an action to your flow, click on it in the left-hand pane and drag it into the flow. When you do, some actions will show a popup window for specifying action properties.

My Desktop Flow

Recall that what I wanted to do is execute a program when new rows were inserted in the SQL tables. Although the same program is executed to update QB, a different argument is used to identify to the program which table(s) to use. I decided to use an input variable to determine how the flow would proceed. The first thing I did was to create that input variable.

Next, I needed to check the variable contents, so I dragged in the If action and compared it to one of the possible values. In between the If and End if it created, I dragged a Run application action. In the properties window for that action, I browsed the directories to point to the location of the utility and added the corresponding argument for that condition. I then dragged a Stop flow action after the app runs.

My Cloud Flows

How, you might ask as I did, do you get the desktop flow to run when you want it to? It could not be easier. First recall I mentioned it was important to log in for your desktop Power Automate. Doing so registers the local machine in your account and enables direct communication between the local machine and cloud flows.

You can see this right away by opening Power Automate and logging into your account. Under My Flows, there is a Desktop Flows tab and the flow you created should appear and look something like this:

Desktop Flows

Now create a new cloud flow or edit an existing flow. I’ll show you a tiny manual flow that launches the desktop flow.

Simple flow

The action is “Run a flow built with Power Automate for desktop.” When you select it, you must specify a desktop flow name, but the names will appear in a dropdown list. You also have to specify Run Mode: attended or unattended. If you choose unattended, the user cannot have any open windows, and it seems to me that they can’t be logged on – but that’s not authoritative coming from me. Attended will work with the user logged on.

Set priority to your choice, and in my case, I also provided a value for the input variable.

That’s it!

What could be easier? Of course you can use this in a great many ways and for a great number of things. But the basic concept remains the same: communicate between your cloud flows and desktop to perform desktop level activities.

PowerApps, Sequence Numbers, Uncategorized

Creating Multiple Records with a Sequence Number in ForAll

Scenario

I suspect this is a common situation, but I had a collection of items representing a set of records to add to a data source. More specifically, these were being added to a SQL table as child records of a parent table item:

  • The parent record is in another table
  • Each child record (that I am creating in the ForAll) has the parent record key
  • Each child record also has a sequence number column

The sequence number starts at 1 and increases by 1 for each record.

The Problem

At first glance this seems to be no problem at all. Just set a variable and increment it for each new record. But, if you don’t know, Set() is not valid inside a ForAll loop. Neither is UpdateContext().

This is how I accomplished getting this to work.

How to Do This

  1. Before the ForAll statement, use this statement (edit to suit your needs)

    ClearCollect(colSeqNo, Sequence(CountRows(colLineItems)));

    This creates a single-column collection with a column name of Value; it starts at 1 and increments by 1 for the count given in the parentheses after Sequence. I used the count of rows in the collection containing the items used to create new records.
  2. ForAll(colLineItems, Patch(<datasource>, Defaults(<datasource>),
    SeqNum: Value(First(colSeqNo).Value), ….

  3. Of course, add additional fields in your Patch statement.
  4. After the Patch statement,

    Remove(colSeqNo, First(colSeqNo)) // remove first row to get next seq number
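Pulling steps 1 through 4 together, the whole pattern looks roughly like this (the datasource name and the extra ParentKey column are placeholders for your own fields):

```
// Build a collection of sequence numbers 1..N
ClearCollect(colSeqNo, Sequence(CountRows(colLineItems)));
ForAll(colLineItems,
    Patch(MyDatasource, Defaults(MyDatasource),
        {
            SeqNum: First(colSeqNo).Value,  // next unused sequence number
            ParentKey: varParentKey          // plus your other fields
        }
    );
    // Drop the row just used so the next iteration sees the next number
    Remove(colSeqNo, First(colSeqNo))
)
```

Note that this works inside ForAll because Remove operates on colSeqNo, which is not the collection being iterated.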

That’s It. Why It Works

Here’s a view of the collection for Sequence:

View collection just shows the first 5 rows.

The First() function gets the first row of the collection, so it supplies the next sequential number for the sequence number field. Remove(colSeqNo, First(…)) deletes that first row, making the next sequential number the next row used, i.e., it becomes the First row.

You probably will find more uses for this technique than this one, but this solved a big problem for me.

I welcome your feedback and encourage you to post any comments or questions.

PowerApps, QuickBooks, SQL, SQL connector, Uncategorized

Invoices or Sales Orders in Power Apps

Introduction

While this post describes using a SQL database, it is equally applicable to SharePoint lists or just about any similar data source. In particular, the SQL database I used is resident in an Azure instance, but that, too, makes little difference. The actual Power App I built requires a Premium license; however, if SharePoint lists were used instead, the Premium license requirement would go away.

So why, you might ask, did I select SQL as the data source? The answer is simple. The underlying data for the SQL tables comes from QuickBooks Desktop (Enterprise in this particular case). A utility, LetSync, a Windows desktop program, permits bi-directional synchronization between QuickBooks and a SQL database. You can choose what data to sync to SQL and how often to do it.

Wait a minute, you are probably asking, QuickBooks already has the ability to create sales orders. Why duplicate that feature? Consider this scenario: the company has large distribution warehouses with several thousand items. Many are large and instance specific; think specialty lumber products. Their customers like to come to a warehouse and pick individual items. It did not prove practical for a salesperson to try to create a sales order on a laptop or tablet. Note that you can only use QuickBooks Online on an iPad or Android tablet or smartphone.

I will emphasize again that while this example does use the LetSync-maintained SQL database to both get data and to create a sales order that ends up in QuickBooks, the overall design and functions of the app are not at all dependent on using that facility.

The App Overview

When the app starts, the home screen shows up like this:

The requirement is to first select a customer for the sales order (note again that this could be an invoice in your own application). The combo box at the upper right is searchable and has this property

The SQL table is “customer” and has a column “FullName”. QuickBooks also uses a Boolean column, IsActive, and I filter by that to only use active customers.

Another important property of the combo box is OnChange whose value is

ClearCollect(colCustomer, Filter(customer, FullName = cmbCustomer.Selected.FullName));
Set(varCustomer, galSelectedCust.Selected.lblCustName.Text);
Set(varBillAddr, galSelectedCust.Selected.lblBill1.Text &
If(!IsBlank(galSelectedCust.Selected.lblBill2.Text), Char(13) & galSelectedCust.Selected.lblBill2.Text, "") &
If(!IsBlank(galSelectedCust.Selected.lblBill3.Text), Char(13) & galSelectedCust.Selected.lblBill3.Text, "") &
If(!IsBlank(galSelectedCust.Selected.lblBill4.Text), Char(13) & galSelectedCust.Selected.lblBill4.Text, "") &
If(!IsBlank(galSelectedCust.Selected.lblBill5.Text), Char(13) & galSelectedCust.Selected.lblBill5.Text, "") &
Char(13) & galSelectedCust.Selected.lblBillCity.Text & ", " & galSelectedCust.Selected.lblBillState.Text & " "
& galSelectedCust.Selected.lblBillZip.Text
);
Set(varShipAddr, galSelectedCust.Selected.lblShip1.Text &
If(!IsBlank(galSelectedCust.Selected.lblShip2.Text), Char(13) & galSelectedCust.Selected.lblShip2.Text, "") &
If(!IsBlank(galSelectedCust.Selected.lblship3.Text), Char(13) & galSelectedCust.Selected.lblship3.Text, "") &
If(!IsBlank(galSelectedCust.Selected.lblship4.Text), Char(13) & galSelectedCust.Selected.lblship4.Text, "") &
If(!IsBlank(galSelectedCust.Selected.lblship5.Text), Char(13) & galSelectedCust.Selected.lblship5.Text, "") &
Char(13) & galSelectedCust.Selected.lblShipCity.Text & ", " & galSelectedCust.Selected.lblShipState.Text & " "
& galSelectedCust.Selected.lblShipZip.Text
);
Set(varCustSelected, true);
ClearCollect(colSalesTaxItem, Filter(itemsalestax, ListID = Text(galSelectedCust.Selected.lblItemSalesTaxListId)));
If(galSalesTaxItem.Selected.IsActive, Set(varSalesTax, true), Set(varSalesTax, false)); // if sales tax rate is inactive, set to false
If(galSalesTaxItem.Selected.TaxRate = 0, Set(varSalesTax, false), Set(varSalesTax, true)); // if sales tax rate equals zero, set to false
Set(varSalesTaxPerCent, Value(galSalesTaxItem.Selected.lblTaxRateDec.Text)) // set tax rate as percent; stored as integer

Let me simplify this a bit for you.

  1. The customer record is used to create a collection
  2. A Boolean variable is set to true; when true, this variable changes the display mode for the icon that adds line items from disabled to edit.
  3. Text variables for the billing and shipping addresses are created which exclude blank lines. These become the Text properties of the two address blocks.
  4. Sales tax information is created in a collection from the table ‘itemsalestax’.

Once a customer is selected (in this case by typing “power a” in the combo box), the screen now looks like this:

Note that the customer account number, billing and shipping address have all been completed from customer data. Also note that the + icon to create new line items has been enabled.

P.O. No., Rep and Project are optional fields to enter. Rep comes from a SQL table and the other two are simply text fields.

Adding a line item to the sales order

This was the bit that proved to be tricky. I did not want to use the SQL table “salesorderlineret” as a data source; that would have made some things easier but harder to undo if the sales order was abandoned or line items changed. So I chose the approach I will describe now.

In the App OnStart, I do some housekeeping by initializing some variables, but the pertinent actions here are

ClearCollect(coLLineItems, {Item:"", Desc:"", Ordered:"", UM:"", Rate:0, Amount:0, SeqNo:0, SalesTax:0});
Clear(coLLineItems);
ClearCollect(colSeqNo, {SeqNo:0})

The collection “coLLineItems” has several columns defined, although the only one that is directly used here is SeqNo. You also see that SeqNo is part of a single-element collection and starts off with a value of 0.

When the + icon to add a line is clicked, the OnSelect actions are

Set(varThisAmount,0);
Set(varThisSalesTax,0);
Set(varSeqNo, varSeqNo+1); // increment sequence number
// Now create new row by patching collection
Patch(coLLineItems, Defaults(coLLineItems), {Item:"", Desc:"", Ordered:0, UM:"", Rate:0, Amount:0, SalesTax:0, SeqNo: varSeqNo})

The two variables will come into play later. Then varSeqNo is incremented by 1, and a new row, whose critical value is the sequence number, is patched into the “coLLineItems” collection.

Now a new line has been added and the screen shows

  • On the left is a trash can icon; clicking it removes the line item row.
  • Description gets its value from the selected combo box. Its Text property is If(IsBlank(cmbItems.Selected.Name), "", cmbItems.Selected.SalesDesc).
  • Ordered Quantity is a text input field with a number format. It has two important properties:
    OnSelect:
    Set(varThisQty, galLineItems.Selected.txtIQuantity.Text); // Get current QTY
    Set(varThisAmount, varThisQty * galLineItems.Selected.lblRate.Text); // Get current amount not including sales tax
    // if sales tax is to be computed, add to sales tax line
    If(varSalesTax, Set(varThisSalesTax, varThisAmount * galSalesTaxItem.Selected.lblTaxRateDec.Text))
    OnChange:
    // Back out any previous amounts from extended price and sales tax
    Set(varSalesTaxAmt, varSalesTaxAmt - varThisSalesTax);
    Set(varSOTotal, varSOTotal - varThisAmount);
    Set(varAmount, galLineItems.Selected.txtIQuantity.Text * galLineItems.Selected.lblRate.Text);
    Set(varSOTotal, varSOTotal + varAmount);
    // if sales tax is to be computed, add to sales tax line
    If(varSalesTax, Set(varSalesTaxAmount, varAmount * galSalesTaxItem.Selected.lblTaxRateDec.Text));
    Set(varSalesTaxAmt, varSalesTaxAmt + varSalesTaxAmount)
  • UM (unit of measure) has a Text property of
    If(IsBlank(cmbItems.Selected.Name), "", cmbItems.Selected.UnitOfMeasureSetRef_FullName)
  • Rate has a Text property of
    If(IsBlank(cmbItems.Selected.Name), "", cmbItems.Selected.SalesPrice)
  • Amount has a Text property of
    Text(txtIQuantity.Text * lblRate.Text, "$ 0.00")

You might wonder why I didn’t reference the gallery where the line item collection is the Items property. Well I did, lots of times. I could just never get it to work properly. In particular, the combo box to select the items just wasn’t a column in the collection.

You also might be wondering about the OnSelect and OnChange properties of the ordered quantity text input. Here’s why they work.

On a newly created row, the calculated amount and sales tax will be 0, so grabbing those in the OnSelect property and then backing them out of the total amount and total sales tax has no effect. However, if the ordered quantity is subsequently changed, those amounts will be the last calculated BEFORE the quantity is changed. This enables the calculated amounts to be backed out first, then the new calculations used to redo the total amount and sales tax for the order.

You might have spotted that the + icon to create a new line was back to disabled. Once an ordered quantity is entered, the icon is reset to edit mode and a new line can be added.

Once a quantity is entered for a selected item, you will see

The amount is shown as the ordered quantity times the rate, but note that at the bottom, the total is the sum of the amounts AND the sum of the calculated sales tax.

Discarding the Sales Order

Sometimes you either want to discard the sales order and start over or just not record it. There is a button for that. In addition to resetting some variables, the key actions are to clear the collections for line items, customer and sales tax items.
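A hedged sketch of that discard button, using collection and variable names from earlier in this post (your housekeeping variables may differ):

```
// Reset state and totals, then clear the working collections
Set(varCustSelected, false);
Set(varSeqNo, 0);
Set(varSOTotal, 0);
Set(varSalesTaxAmt, 0);
Clear(coLLineItems);
Clear(colCustomer);
Clear(colSalesTaxItem)
```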

Saving the Sales Order

Although the actions for this button are lengthy, they are actually simple. First, the overall sales order information is saved in the table “salesorder.” Then, in a ForAll action, each line item is saved in “salesorderlineret”. This is where SeqNo is important. QuickBooks requires a sequence number for each line item in order for them to display properly on screen, in print, or in email. So sequence numbers starting at 1 and increasing by 1 are saved in the collection of line items. In the Patch actions, the reference gallery.Selected.<lblname>.Text is used to access the values needed, a change from how each label’s Text property is used elsewhere.
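A simplified sketch of that save logic (as noted, I actually pulled some values via gallery label references; here I read the collection columns directly, and the SQL column names in the Patch records are guesses, with most fields omitted):

```
// Save the sales order header ("operation" = "ADD" tells LetSync to sync it)
Patch(salesorder, Defaults(salesorder),
    {CustomerRef_FullName: varCustomer, operation: "ADD"});
// Save each line item, carrying its sequence number from the collection
ForAll(coLLineItems,
    Patch(salesorderlineret, Defaults(salesorderlineret),
        {SeqNo: SeqNo, ItemRef_FullName: Item, Quantity: Ordered})
)
```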

How the Sales Order Gets to QuickBooks

In the “salesorder” table in SQL, an extra field named “operation” is used by LetSync. When a new item is to be sent to QuickBooks, its value is “ADD.” When the sync task for sales order runs, then LetSync does the following:

  • Sends the sales order header information to QuickBooks
  • Gets the updated information from QuickBooks (QB adds sales order number and other specific data)
  • Updates the record in “salesorder” with this information
  • Adds the line items to the QuickBooks sales order
  • Deletes the line items in “salesorderlineret”
  • Adds new line items from QuickBooks for the sales order

Once this is completed, the new sales order will appear in QuickBooks. You can also do things like create a Power Automate flow when a Sales Order item changes and say, create an email with the newly created sales order number and line items detail, or whatever fits your business needs.

If you have no reason to interface with QuickBooks but want to create invoices or sales orders, or things that have a “header” and line item concept, you should be able to modify these techniques to match your data sources and particular needs.

I hope you will post your comments and let everyone know how you have enjoyed and used this in your work.

Arrays, Filter Items, Power Automate

Find Matched or Unmatched Rows between Two Arrays in Power Automate

The Problem I Wanted to Solve

I have a set of Power Apps and Automate flows that work with both a SQL database and SharePoint lists. The SQL tables get updated in pairs, rather like an invoice that has one header record and then in a separate table multiple records for line items. The columns in the SQL table are pre-determined, but I needed to house a number of additional columns of data. One of my flows was triggered when a new row was created in the header table. I created a corresponding record in one SharePoint list, then retrieved the line item records from another SQL table and created the line item records in another SharePoint list.

But, the source of the SQL records behaves a bit differently when the item is changed as opposed to initially created. The header information is updated with whichever columns changed, but ALL of the line item records are deleted and replaced with a new set.

The new record set could be any one of the following:

  • The records match one for one even if some of the contents have changed
  • There are new records that don’t match any previous one
  • There are old records that don’t match any of the new ones.

I needed a flow that would handle all three cases without being enormous and complex.

Arrays Make this Easy

I am ignoring the header being updated because it is only one record that changes. The ID of the record is passed from the triggering event, so it’s a matter of getting the SharePoint record and updating the fields. Oh yeah, there is a “user data” field in the SQL table where I store the SharePoint list ID of the corresponding record.

Next, my flow action was Get Rows for the line items. The filter in that action uses the key field of the header to match the stored key field in the line items. It then uses Get Rows, but for SharePoint, to get the previous corresponding line items in the SharePoint list.

Each of these actions returns an array of records. This is a great thing for building out a flow that is not only short and efficient but also executes blindingly fast. The overhead of Apply to Each is not needed.

Determine Which Records Are Not in the New Record Set

You may be amazed that this can be done with one flow action! It is conceptually the EXCEPT operation found in SQL. While there is no flow function for EXCEPT, you can use a Filter Array action to accomplish it.

Here is a test flow that illustrates how this works:

The first action creates array A1 and the second creates A2. The first element of A2 exists in A1 but the 2nd two do not. When it executes, here is the result of the Filter Array

The output is also an array but contains only the two rows from A2 which had no matches in A1.

Getting the New Items in the SQL Table

A similar method gives a resulting array of records that are new and were not ones there before. Simply reverse the arrays in the Filter Array action.

What if You Only Want to Match Items Based on Certain Columns?

This is precisely what I needed to do. For example, a line item might have a value in it change (like price or quantity in my invoice concept), but it is not a completely new row and just needs updating. In that case, you will need to alter the Filter Array value on the right-hand side from item() to the following format:

item()[‘<columnname>’]

‘<columnname>’ is a placeholder for a SharePoint list column, a SQL table column, an Excel column, or whatever column your data uses. For example, in the arrays A1 and A2 created in the test flow shown above, Name and ID are column names. Enclose the column name in [] and also single quotes, like

item()[‘Name’]
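For example, to keep only the rows of A2 whose Name value appears nowhere in A1, one approach (the action names here are my own assumptions) is to first flatten A1 to just its Name values with a Select action in text-map mode, then reference that output in the Filter Array condition:

```
Select action "SelectA1Names":
  From: A1
  Map (text mode): item()['Name']

Filter Array action:
  From: A2
  Advanced-mode condition:
  @not(contains(body('SelectA1Names'), item()['Name']))
```

The contains() function tests membership when its first argument is an array, so each A2 row is kept only if its Name is absent from the flattened list of A1 names.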

What if You Need More than One Column to Match?

I will create another post on the way(s) in which this can be done. Stay tuned.

Uncategorized

Network Issue Suddenly Appeared on Server – Could not be reached but could reach all other devices

Windows Server Configuration

The particulars of the server may be unimportant to the problem and solution, but nevertheless I am including them.

  • Windows Server 2016
  • Dell R420
  • 32GB RAM
  • 2 NIC onboard, Broadcom
  • Running Hyper-v

The Symptoms

I got a call from users today telling me that suddenly, an hour or so into the workday, they lost connection to file shares on the server. Some remote diagnostics didn’t turn up much, but I tried the following:

  1. Tried pinging the server from the network, and that worked as expected.
  2. Tried browsing to the server via File Explorer as \\192.168.100.2, but that failed.
  3. Rebooted the server but no change.
  4. Rebooted network computer and also no change.

There is a second server, and I noticed there was a DHCP warning that it could not communicate with the other server. This showed on both servers.

I went down DHCP rabbit holes for a short bit, but still nothing.

Checked NIC Settings

The primary NIC adapter seemed just fine, and it turned out that all the shares were based on the IP address for that NIC. So I tried File Explorer with the other NIC’s IP address, and to my surprise it worked just fine. But no matter how I jiggled the first NIC’s settings, it was still inoperative.

What Finally Solved the Problem

Using Device Manager, I uninstalled the NIC. As soon as the uninstall completed, I did a scan which found and reinstalled the NIC. I then set the static IP address and DNS server list, and then it was back fully working.

Not really certain why the server responded to pings but not to file shares. But the fix was easy, once I figured out the target.

One thing I can point out is that if you are remotely accessing the server over the problem NIC, you will lose connection when it is uninstalled. I ended up on site to do this.

Geospatial, Maps, PowerApps

How to Use Address from a Gallery Item for a Map

Overview of the Conditions

Suppose you have a gallery of items, each of which contains an address. By choosing an item in the gallery, you then want to display a map of where the item address is. The following will give you a step by step description of how to accomplish that.

Understanding a Bit about Power Apps Maps

In order to use the map control, you will first have to enable it in Power Platform Admin Center. You must be an administrator to do this. Navigate to


https://admin.powerplatform.microsoft.com/environments?l=en-US

The initial screen will show you a list of environments. Choose the correct one, and then click on the environment name. On the next screen select Settings.

Power Platform admin center
Settings

On the settings page, expand Product and then click on Features.

Product/Features

On the Features page, turn on Geospatial Services.

Geospatial Services

Now you can start to work with maps in Power Apps.

Let’s Start with Our Gallery

On the first screen of the app, I created a gallery and populated it with several lines of address information from a SharePoint list. I also added a map pin icon which, when selected, navigates to another screen that contains the map. But the first OnSelect action is to create a collection of the address for the selected gallery item.

Gallery of Addresses

The OnSelect for the icon has these expressions:

ClearCollect(colmap, colgallery);
RemoveIf(colmap, gallery1.Selected.txnid <> txnid);
Navigate(screen2)

To explain a bit, colgallery is created in the OnVisible of the gallery screen and is the result of a ClearCollect from the datasource. gallery1 is the name of the gallery in the above screen, and txnid is a field in the collection that is hidden on the display. colmap first gets all the items but is then reduced to just the selected one.

The Map Screen

Let’s start with a blank screen and then insert controls to make the map functional. First, let’s insert a map.

Click on the Insert tab, then Media, and drop down to see Maps. Click to add it to the screen. (Note: the lovely diamond next to it means it is a premium service and you have to have a license supporting it.)

Insert Maps

Here’s the screen with the map and a gallery; the gallery has colmap as its source and the address fields added. These will be used in a bit.

Now we need to add another control. With Insert selected, click on the Input dropdown and scroll to the bottom, choose Address input.

Address input

Now our screen looks like this.

Address input added

Configuring the Controls to Tie Them Together

There are several things we need to do to make the map functional and reflect the address from the gallery item.

  1. Insert a button that, when selected, executes the expressions to get the address to the map properly.
  2. Modify the advanced settings on the map to interact with the button.
  3. Set some properties on the Address input control.
  4. And add expressions to the OnVisible screen parameter.

Button OnSelect

You are free to choose other names besides the ones listed below. But then you must refer to those names (the collection and the two column names) in the map control properties. See the Map Control heading for the instructions.

If(IsBlank(AddressInput1.SearchResultJson), " ",
ClearCollect(colAddress,
{Longitude: Text(AddressInput1.SelectedLongitude), Latitude: Text(AddressInput1.SelectedLatitude)})
)

What this does is to create a collection with two values: Latitude and Longitude created from the address in Address input. Whatever names you use will be reflected when the map properties are set.

Address input Properties

Default is the first parameter. What we are going to do is set the concatenated address values to be the default. So that becomes

lblAddr1 & " " & lblAddr2 & " " & lblAddr3 & " " & lblCity & " " & lblState

You might want to create a label on the page with the text set to the expression and then hide the label. Either way works.

Radius is an optional but likely parameter to change. It is the distance in meters to limit the search. A mile is about 1609 meters, so 10 miles would be roughly 16090.

SearchLimit is the number of items you want the search to return. SearchWithinRadius is a Boolean value restricting searches to be within the radius.

No other parameters are required but you may find them useful.
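
Taken together, the Address input settings from this section might look like the sketch below. The control and label names, and the property names, are the ones used in this post; adjust them to your own app.

```
// Address input control properties (names as used in this post)
Default:            lblAddr1 & " " & lblAddr2 & " " & lblAddr3 & " " & lblCity & " " & lblState
Radius:             16100   // about 10 miles, in meters
SearchLimit:        5       // return at most five suggestions
SearchWithinRadius: true    // restrict suggestions to the radius
```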

Map Control

The first thing you must do is set the Items property. If you use the right-hand properties column, the dropdown will show a list of choices. Make sure you choose the collection you created in the button expression shown above!

Next, using the control properties on either the left or the advanced right-hand column, set values for ItemsLatitude and ItemsLongitude. The acceptable values, which will appear as prompts, are the column names from the collection specified for the Items property of the map; these are text fields which must match the column names in the collection.

OnVisible Property for Map Screen

Because there is a default value for the Address input, which in this app has street, city, and state but no zip code, it is probably enough to get a map location. To do that, copy the expression from the Map Address button and put it in the OnVisible property. When the map first displays, this may work without additional user effort. It all depends on your data for the address.
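
For reference, the OnVisible sketch might be the following, assuming the same control and collection names as the button expression above:

```
If(!IsBlank(AddressInput1.SearchResultJson),
    ClearCollect(colAddress,
        {Longitude: Text(AddressInput1.SelectedLongitude),
         Latitude: Text(AddressInput1.SelectedLatitude)}
    )
)
```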

How This All Works

Now that the pieces have been tied together, let’s see how this all works. In the screen below, a gallery item was selected and then the map screen appears. The address values from the gallery item appear in the Address input field via the default value. In this case, I deleted the city value for illustration purposes, and the actual address then showed as a suggestion.

SuggestedAddress

Next, I selected the suggested address and clicked on the Map Address button.

Mapped Address

You can see the map changed and the actual address is pinned on the map.

Another Cool Trick

One of the map control properties, SatelliteView (Boolean), determines whether the street or satellite view is displayed. You can dynamically control this in your app by adding a toggle. The toggle’s OnSelect expression should look like this:

Set(varSatellite, Toggle1.Value)

Then, set the SatelliteView property of the map control to varSatellite. As the user turns the toggle on and off, the map display changes accordingly.

Satellite View

Many More Things to Do

While the focus of this post has been to display a map based on a selected address from a gallery, it is just a start of what you can do. Here are some other ideas.

  • Select multiple addresses and pin them to the map
  • Determine the distance between one location and one or more others, including “your location”
  • Turn on card identification in the map control and get additional information (like the name, e.g.) from the address gallery
  • Save the latitude and longitude values from the map to your address list

And many, many more. Experiment with the map control properties and, using this basic understanding of how it works, discover new ways of using it.

Thanks for reading this post.


Patching a People Field in SharePoint when You Only Have a Name

Patching a People Field Requires Specific Information

You may or may not know that in order to patch a people field in a SharePoint list, you must provide information about the person the data applies to. In the app I had to create, I only had the names of people who all were Office365 users but the name information did not itself come from a people field. So I had to access that data indirectly from where I did the patch.

What Power Apps Requires to Patch a People Field

Patch needs a format like the following for the People field:

{'@odata.type': "#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
 Claims: "i:0#.f|membership|" & drpEmail.Selected.Mail,
 Department: drpDisplayName.Selected.Department,
 DisplayName: drpDisplayName.Selected.DisplayName,
 JobTitle: drpTitle.Selected.JobTitle,
 Picture: txtImage.Text,
 Email: drpEmail.Selected.Mail
}

I have omitted the list and ID information of course, focusing on the field data.

Note there are Power App elements from my app in this:

  • drpEmail.Selected.Mail
  • drpDisplayName.Selected.Department
  • drpDisplayName.Selected.DisplayName
  • drpTitle.Selected.JobTitle
  • txtImage.Text

Each of these values gives a required Patch value.

How I Did It

First I created a new screen, and then inserted these controls:

  • A text input field. For its default value, I created a variable, txtField, that elsewhere gets set to the name I want to use to update the people field.
  • A label with a Text value of User().Image
  • A dropdown with an Items value of Office365Users.SearchUser({searchTerm: txtField}) for each of the parameters required for the patch: claims, department, display name, job title, picture, and email.

When you add dropdowns, they will all seem the same and essentially won’t provide any useful information. To fix that, follow along in the next section.
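
As a sketch, each dropdown on the lookup screen might be configured like this. The control names are the ones used in this post; how you feed the search term depends on how you named your text input or variable.

```
// Items property, identical for every dropdown
Office365Users.SearchUser({searchTerm: txtField})

// Value property, different per dropdown (chosen in the right-hand panel)
// drpEmail        -> Mail
// drpDisplayName  -> DisplayName
// drpTitle        -> JobTitle
```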

Getting the Dropdowns to Have Correct Values

There is nothing complicated about this.

  1. Click on a dropdown after making sure it has the correct Items value shown above.
  2. On the right-hand side of the screen, look at the properties column.
  3. The Value dropdown lets you choose the specific data to be shown.
  4. Select the value you want for the dropdown and repeat for the remaining ones.

Bringing It All Together

Here’s the cool part: when you set the variable txtField to the name of a user, the dropdown and text image fields will all correctly populate. You can then use their values in the patch for the people field in your SharePoint list.

Caution!

I have not tested this for the case when multiple users have the same name. But instead of using the display name, using the email address for the variable does indeed work. That doesn’t help a lot if you – like me – don’t have the emails available at the point where you need to save a user name in the SharePoint list.

Parting Suggestion/Tip

For a complex patch or one with many columns, I often set a variable to use in the patch. For this patch, here is what I used:

Set(varRequesterClaim, {'@odata.type': "#Microsoft.Azure.Connectors.SharePoint.SPListExpandedUser",
    Claims: "i:0#.f|membership|" & drpEmail.Selected.Mail,
    Department: drpDisplayName.Selected.Department,
    DisplayName: drpDisplayName.Selected.DisplayName,
    JobTitle: drpTitle.Selected.JobTitle,
    Picture: txtImage.Text,
    Email: drpEmail.Selected.Mail
});
Patch(SalesOrders, LookUp(SalesOrders, ID = varID), {'Deliver Driver': varRequesterClaim});

I do this for two reasons. First, I think it is easier to make changes to the expression for the variable than to make it inside the Patch command. The second is for debugging; you can always examine the variable if the patch isn’t working as you expect to see what exactly it is executing.

This was a fun hack to figure out. Hope it helps you, too.


Send email from Shared Mailbox with image attachments

You Need to Use Power Apps and Power Automate

From Power Apps alone, this is quite simple, but the restriction is that you cannot send from a shared mailbox. Instead, emails will be sent from the mail account of the logged on user. In many applications, a Power Apps user is performing a function that needs emails to be sent from the business but not from their individual email. Hence, Power Apps needs to call Power Automate to send the email.

Not so fast, though. You have to pass the Power Automate flow parameters required to send the email. Email addresses and email body are straightforward enough, but it takes a bit to deal with the image files.

About the Power App I Created

It probably does not matter a great deal what your own app does, but there are a couple of key things I have done that may affect how you use this information to create your own.

  1. I have both a pen input screen and a camera screen, the former to capture a signature for delivery of orders and the latter to take a photo of the delivered items.
  2. When the pen input is captured, and when the photo is taken, the app first calls a Power Automate flow to save the image in a separate SharePoint picture library, and then the app saves the URL of that saved item into a SharePoint list hypertext column.
  3. The URL will be of the form

    https://<tenantname>.sharepoint.com/sites/<sitename>/<filename>.jpg
  4. This format is important because all you want to use in the flow is just the <filename>.jpg part. That will be described later.
  5. In the app, getting the full file name is just a matter of getting the column value as a string. But here’s a trick:

    Set(varSigAttachment,
        Substitute(Text(galDelSO.Selected.'Delivered Signature'),
            "https://<tenantname>.sharepoint.com/sites/<sitename>/", ""));

This expression creates a variable, varSigAttachment, which holds just the file name. The Substitute function replaces all of the leading URL text with an empty string.

Now Create the Power Automate Flow

There are numerous blogs and videos on exactly how to do this, so I won’t duplicate it here. Instead I will describe how the flow should be built.

The first few steps of the flow should initialize variables for each of the parameters passed from Power Apps. In my case, I had these in this order:

  • TO email address
  • CC email address
  • Body text – HTML text in my case
  • variable for pen capture image file name
  • variable for photo capture image file name

Each of these variables should be string type.
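
From the Power Apps side, the call that passes those parameters might look like the sketch below. The flow name and variable names here are placeholders based on the description above, not fixed values.

```
// Hypothetical flow name; arguments in the order the flow expects them
SendSharedMailboxEmail.Run(
    varToAddress,
    varCCAddress,
    varHTMLBody,
    varSigAttachment,    // pen capture image file name
    varPhotoAttachment   // photo capture image file name
);
```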

The next two steps are repeated for both the pen input image and the camera photo.

  1. Get metadata for file path. Since my images are in a SharePoint library, I chose SharePoint file path. I then had to specify the site and library from dropdowns, and for the file path, I used the variable created for that file.
  2. The next step is for file content. It’s very similar to get metadata in step 1.

Repeat these two steps for the other image (or more if you have them).

Lastly, add a step to send an email from a shared email box. Use the parameter variables for TO, CC, and body (plus others you might need).

For Attachments, do the following for each:

  1. For the name, use the dynamic variable Name from the Get Metadata action output.
  2. For the attachment content, use the dynamic File Content from the Get File Content output.

Note that the dynamic variables appear in newest-to-oldest order, and that each Get Metadata and Get File Content action creates a similar set of variables. Be sure to choose the correct one for each attachment.

That’s It!

This flow doesn’t return any values, but you certainly could if you wanted to.


A Bit of QuickBooks Trivia

Getting QuickBooks Data

Maybe you are one of the few QuickBooks Desktop users who are either developing apps for use with QuickBooks or want to create custom reports not doable within the desktop version. In that case, you are likely to use the QB SDK or a third party product that will make the data available via Microsoft Access or as a SQL database.

I had reason to do the latter.

Getting Sales Orders, Invoices and Purchase Orders Data

If you look at the database tables, you will see there are two tables related to each of these entities. I’ll use Sales Orders as the example, but the other two are completely analogous.

For sales orders, the two tables are “salesorder” and “salesorderlineret”, the first containing one row for each sales order and the second containing one row for each item on the sales order. For example, if a sales order has four line items, salesorderlineret will have four rows, one for each line item.

How Do Those Tables Relate to Each Other?

In other words, if you locate a sales order in tables salesorder, how do you find the corresponding line items in salesorderlineret?

The answer is that data in a field, or column, in salesorder table, is also in a field in salesorderlineret.

Salesorder Table

Field Name: TxnID contains a QB-generated value that is essentially a unique value for that table.

Salesorderlineret Table

Field Name: PARENT_IDKEY contains the value of TxnID from salesorder table

How to Get an Entire Sales Order

I am going to use the SQL query to illustrate this. Suppose the TxnID value is 10002E-1343162172. This value comes from the salesorder table row referring to the sales order. Then do this query:

SELECT * FROM SALESORDERLINERET WHERE PARENT_IDKEY = '10002E-1343162172'

The result will be one or more rows where the PARENT_IDKEY matches the TxnID of the salesorder entry.
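
If you want the header and its line items in one result set, a join does the same thing in a single query. This is a sketch: the two key columns are from the tables described above, but any other header columns you select (RefNumber here) are illustrative and depend on your schema.

```sql
SELECT so.TxnID, so.RefNumber, line.*
FROM   salesorder so
JOIN   salesorderlineret line
       ON line.PARENT_IDKEY = so.TxnID
WHERE  so.TxnID = '10002E-1343162172'
```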

Swap out invoice or purchaseorder for salesorder and you can do the exact same thing.


On-Premise Connector for SQL

On-Premise Connector for SQL – How to use in Power Apps and Power Automate

Power BI, Power Apps, and Power Automate can all access SQL databases whether in Azure or on-premise. If you have the latter, you will need to install and configure the on-premise data gateway available from Microsoft. There is no charge for this gateway but licenses are required for SQL Server and for the Power Family products that might use it.

I am not covering how to install or configure the gateway in this post, but I will tell you it is quite easy and straightforward to do. There is an excellent article on how to do this here and includes a download link for the installation.

What is the purpose of this post then? To save you what for me were several days of frustration trying to get it to work with my SQL database on my server. I went down some rabbit holes before figuring out what was in the end a most simple and easy solution.

What Not to Do

Cool, I thought. I can now just go to Power Apps, add this as a data source, and be ready to go. That was two days ago, and only today was that a reality.

In Power Apps, after a new canvas app appeared, I chose View -> Data sources and then selected SQL as a connector. A window opened on the right-hand side of the screen with a dropdown to choose Azure, SQL, or Windows authentication. I had set up my SQL Server to use Windows authentication and picked that option.

The fields that I then had to enter were SQL server, database name, user name and password. The gateway was already selected and that made me confident. So I (wrongly) entered the name of the database, my table, and user name and password. I assumed that the user was the Windows Server user with access to SQL and tried it with and without the domain name. Neither worked and the error was essentially cannot find SQL server.

After checking firewalls on the server, gateway configuration and more, I tried again. Still no luck.

All of a sudden it occurred to me to use the computer name for the SQL server name. Don’t ask me why it was not my first try. I then went through several connect attempts (with and without domain name, work account) but now each attempt gave me a credentials error. More checking server, SQL and firewall connections.

What DID Work

I went noodling around on my Power Apps home page, expanded Data, and then clicked on Connections. I thought, what the heck, let me try adding it here. I searched for SQL and clicked on it. There was a dropdown to choose Azure, SQL, or Windows authentication, and I chose the last one. Then boxes appeared.

  • For SQL server name, use the computer name where it’s installed.
  • For SQL database name, use the name of the database you have created.
  • For user name, use the Windows domain and user name that is authenticated for that SQL server.

Your on-premise gateway should appear. I assume if you have multiple gateways on different servers, you could select; I only have one.

After a few seconds, I got a successfully connected message. That connection now shows in my Power Apps list of connections. Wonderfully, without doing anything else, it also appeared in my Power Automate as well.


When Only a Blinking Message Will Do

Why I Needed This

I developed a Power App for a client that is being used to submit sales and client outreach information. One of the fields is a sales amount, and far too often users fail to enter a decimal point. Instead of $1521.40, for example, the amount will be submitted as $152,140. Rarely are amounts on a report over $10,000.00, so I suggested that a warning message be displayed right next to the amount field.

Of course, over the next few days several were submitted over $10,000.00 and were incorrect. The users, who are used to just entering data, said they never saw the message.

Blink to the Rescue

The human brain is hard wired to detect movement and often doesn’t “see” static images, so blinking text is effective. And it’s simple to do. Here’s how.

  1. In app start, set a variable to false. I used varBlink.
  2. Create a timer control on your screen.
    1. Set Repeat to true.
    2. Set Start to the warning condition. For example, I used If(Value(txtAmt.Text) > 10000, true, false)
    3. Set Duration to the blink rate; I used 500 for 1/2 second.
    4. Set OnTimerEnd to Set(varBlink, !varBlink). This simply toggles the true/false value of the variable.
    5. Set Visible to false. You will want to hide this control.
  3. Create a label with Text set to the warning information. Set Visible on the label to varBlink.
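
Put together, the timer and label settings are just a handful of expressions. This is a sketch: the control names, the txtAmt input, and the message text are examples, not requirements.

```
// Timer1 properties
Repeat:     true
Start:      If(Value(txtAmt.Text) > 10000, true, false)
Duration:   500                       // blink every half second
OnTimerEnd: Set(varBlink, !varBlink)
Visible:    false

// Warning label properties
Text:    "Check the amount - did you forget the decimal point?"
Visible: varBlink
```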

What This Does

When the condition evaluates true, the timer starts. After the duration is complete, the value of the blink variable is swapped between true and false. The warning message displays when the variable is true and is not visible when it is false. Because this changes every half second (in this example), it displays for half a second then goes away for a half second. Because Repeat is set to true, the timer continues to run.

You can stop the timer by having the condition that started it change. In my case, if the amount field were edited to be smaller than $10,000.00, it stops automatically.

Other Things You Can Do

You can use other properties of the label to create more dramatic changes. For example, use the Color property to alternate between two colors with similar logic, or change the Fill property, or both. You can’t accomplish this with the same variable alone, because every half second the warning disappears. One option is a second timer and variable whose duration is twice as long as the first, started on the same condition; you might also toggle a second variable whenever the warning becomes visible.
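
As a hedged sketch of the color idea, you can drive the label’s Color from the same blink variable and keep Visible always true, so the text stays on screen while its color flips; the color choices here are arbitrary.

```
// Warning label properties (sketch)
Visible: true
Color:   If(varBlink, Color.Red, Color.Black)
```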


Date and Time Controls Combine to Create DateTime

Scenario

In PowerApps, I had to create a custom list to mimic a calendar.  Why?  The calendar had to, in various PowerApps that would use it, be able to provide information on a particular date; that information would include whether or not it was a holiday, whether the business unit was closed, birthdays and anniversaries, and events that were not all day but might be a partial day or span several days.

You might be thinking: why not just use a SharePoint calendar? The answer is simple. As seen from PowerApps, only the first event is available. If you connect PowerApps to a SharePoint calendar and then create a gallery to display events, it is easy to see this. Moreover, the elements of a recurring event are not visible in PowerApps.

My PowerApps Layout

I’ll just show you how I am adding an event. I wanted to be able to easily hide or show this functionality, so I created a gallery that is not connected to anything and put text input, date picker, and toggle controls onto the first item pane. I can just toggle the Visible property of the gallery to false to hide it. It also allows using the .Selected value to interact with other parts of the app via those gallery fields and controls.

Here is that part of the main screen:

The Event Description maps to Title, and so on, but the controls pertinent to this post are the date and time fields.  I am only going to deal with Start Date/Time in this post; everything applies to the End Date/Time as well.  In a subsequent post, I’ll discuss recurring events.

All Day Toggle

On the toggle control, I set the OnChange property to

If(tglAllDay.Value,Set(varAllDay,true),Set(varAllDay,false))

I also changed the toggle values to Yes/No instead of On/Off, but that affects nothing but the user interface.

When the user submits the event to be added, the app does the following:

Set(varActualStart, DateTimeValue(Text(drpStartDate.SelectedDate, "[$-en-US]mm/dd/yyyy")));
Set(varActualEnd, DateTimeValue(Text(drpEndDate.SelectedDate, "[$-en-US]mm/dd/yyyy 11:59:59PM")))

The variables varActualStart and varActualEnd are set from midnight on the selected start date to just before midnight on the selected end date. The variables have date/time values, which match the SharePoint list’s starting date column, so patching is easy.

Note that if the user selects an end date prior to changing the toggle, it becomes a multi-day event.  Of course, you should add a check on end date when it changes to make sure it is greater than the start date.  As well, when the start date changes, update the end date to be the same value or whatever your default end date should be relative to the start date.
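
A minimal sketch of that end-date check, using the control names from this post, might go in the end date picker’s OnChange property:

```
// drpEndDate.OnChange (sketch)
If(drpEndDate.SelectedDate < drpStartDate.SelectedDate,
    Notify("End date must be on or after the start date", NotificationType.Error);
    Reset(drpEndDate)
)
```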

Using Start and End Dates/Time Not All Day

It’s not that simple taking times into account.  Let’s start with the dropdowns for Start Date. There is not a way to have the date picker also be a time picker, so I had to create two dropdown controls for hour and minute; I only wanted quarter hour increments on the minute, but you can adjust accordingly.

For the Hour dropdown control, I used the following for Items

["00:AM","01:AM","02:AM","03:AM","04:AM","05:AM","06:AM","07:AM","08:AM","09:AM","10:AM","11:AM","12:PM",
"01:PM","02:PM","03:PM","04:PM","05:PM","06:PM","07:PM","08:PM","09:PM","10:PM","11:PM"]

I added the “:” and the “AM/PM” to my entries (and will take them into account in just a bit), but you can instead use 24-hour times and omit.

The Minute dropdown has the following value for Items

["00:","15:","30:","45:"]

Again, tweak this as your own app requires.

In order to create the date/time variable I need for creating a new event, I used the following functions:

If(Right(drpHour.Selected.Value,2)="AM", Set(varAMPM,0), Set(varAMPM,12));
If(drpHour.Selected.Value="12:PM", Set(varAMPM,0))

What this does is check whether the selected hour is AM or PM and set varAMPM to 0 or 12 accordingly. The exception is 12:PM, which should not be adjusted; hence the second If() expression. This variable is then added to the selected hour in the expression

drpStartDate.SelectedDate + Time(Value(Left(drpHour.Selected.Value,2)+varAMPM),Value(Left(drpQtrHour.Selected.Value,2)),0)

Let me break this down for you.

  1. First, get the selected date from the date picker for Start Date.  That is the first term in the expression, or drpStartDate.SelectedDate.
  2. The remainder of the expression is the Time() function.
    1. Left(drpHour.Selected.Value,2) takes the first two characters of the selected hour.
    2. The larger term, Value(Left(drpHour.Selected.Value,2)+varAMPM), takes the value of that two character string and adds varAMPM to it.  If the variable is 12, then the term is adjusted from 12-hour to 24-hour time.  That’s why 12:PM is the exception.  It needs no adjusting.
    3. The next term, Value(Left(drpQtrHour.Selected.Value,2)), is the minutes value in the Time() function and extracts 00, 15, 30, or 45 as the minutes; your own app may do this differently.
    4. The last value of Time() is 0, as my app doesn’t care about seconds.
  3. This entire expression can be used to set the variable varActualStart so that it can be used in subsequent patching without having to deal with all day or not conditions.  Just use the variable for the date/time column in SharePoint.
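
Putting the steps above together, setting the start variable looks like this (same control and variable names as above):

```
Set(varActualStart,
    drpStartDate.SelectedDate +
    Time(
        Value(Left(drpHour.Selected.Value, 2)) + varAMPM,  // hour adjusted to 24-hour time
        Value(Left(drpQtrHour.Selected.Value, 2)),         // minutes: 00, 15, 30, or 45
        0                                                  // seconds not used
    )
)
```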

Wrap up

This may seem a bit confusing at first read, but going through the expressions one step at a time greatly simplifies things. The starting point is to get the dropdown controls set up, plus the toggle if you wish; then the expressions to build the date/time variable are easy to do.