How do I get DataMesh to create new rows when lookup values match, but mapped columns have different data?


I've created a form that collects some common data, and then multiple requests, each with their own data. For example, I'm collecting the requestor's name, plus 25 bits of data they want added into our system. The 25 bits of data all share the same properties, but it's very unwieldy to have them all on a single row in the intake sheet.

To better format the resulting data, I'd like to create a new row in a separate sheet for each of the requests. I've created a DataMesh config for each request (just 2 so far), which grabs the common data and the request data, then sticks them into matching columns in a new sheet. This should result in one form submission turning into 25 rows, each with the requestor's name and the unique data for that request. This works in my head, but in practice I can't get the DataMesh configs beyond the first one to create new rows. Config 2 claims that the existing rows should be unaffected because "data already exists and is consistent with Source sheet," but the data in these columns are not the same as in the source sheet. The only values that match are the lookup values.

I can't find any more detailed documentation of how to use DataMesh, and I feel like I'm fundamentally misunderstanding how lookup values work, so if anyone could set me straight, I'd greatly appreciate it. Thanks.


Best Answer

  • Genevieve P. (Employee Admin)

    Hi @Derrick.S.

    DataMesh will try to merge data together based on a key lookup value... meaning if I have a Requestor Name of "Genevieve", it will look for that value in both sheets and push information from one sheet to the other to fill out the rest of the row based on that value.

    However, it won't break that data out into multiple rows when the Name is the same: after the first run there's already a row in your second sheet with that name, so subsequent configs just match that existing row instead of adding a new one.

    What I would personally do in this instance is add 25 helper columns so that you can create a unique identifier for each of your DataMesh runs. Each helper column would add the Name together with the Value you're bringing in, so that you have something unique per value that you can use to parse out the rows in your second sheet.
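
    For example, the first helper column's formula could look something like this (the column names here are just placeholders; swap in whatever your intake sheet actually uses):

        =[Requestor Name]@row + " - " + [Request 1 Value]@row

    The second helper column would reference [Request 2 Value]@row, and so on, so every helper produces a different combined string for the same form submission.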

    Add another helper column to your second sheet as well. Then, in your DataMesh configs, you will want to use a different helper column as the lookup value for each workflow (the first helper column for the first config, the second for the second config, and so forth) so that it won't skip over a row just because the name is already in your second sheet.
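
    To make that concrete, here is a rough sketch of how the configs could be pointed, assuming placeholder helper column names (your DataMesh labels may read slightly differently), each using the copy option that adds new rows when the lookup value isn't found:

        Config 1: lookup value = Helper 1 → Helper - Unique Identifier; copies Requestor Name + Request 1 data
        Config 2: lookup value = Helper 2 → Helper - Unique Identifier; copies Requestor Name + Request 2 data
        ...and so on through Config 25.

    Since each lookup string is unique, every config adds its own new row instead of matching a row an earlier config already created.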

    This should bring in your data like so:
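
    (Made-up example values for illustration; your actual column names and data will differ.)

        Requestor Name     Request Value     Helper - Unique Identifier
        Genevieve          Laptop            Genevieve - Laptop
        Genevieve          Monitor           Genevieve - Monitor
        Genevieve          Headset           Genevieve - Headset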

    You can hide the Helper - Unique Identifier column in the destination sheet if needed.

    Cheers,

    Genevieve

Answers


  • Derrick.S.

    @Genevieve P. This is an elegant solution, and I appreciate you taking the time to document it so clearly. In reality, my situation is more complicated than I presented, and I ended up going with another solution that I'll explain here for anyone facing a similar situation.

    Genevieve's solution requires a DataMesh config for every separate request, with the common data mapped in each one. It's probably the best solution, but it takes time to set up initially. In my case, there are about a dozen bits of common data, like "Requestor", that are shared by all requests and must then be mapped to each one. For 25 requests, that'd be 300 columns to map on top of the individual request data itself.

    Instead, I created a DataMesh config for each request that maps only that request's unique fields, plus a separate "common data" config that syncs the common data across all requests. The "common data" config writes directly to the target summary sheet, while the individual request configs each write to a helper sheet (one numbered helper sheet per request). An automation on each helper sheet then copies the new row to the target sheet if the request data is not blank, since DataMesh runs every config on every submission, regardless of whether the user submitted 3 requests or 23.
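
    For anyone rebuilding this, the automation on each helper sheet looks roughly like the sketch below (the sheet and column names are just placeholders, and your exact workflow labels may differ):

        Trigger: when rows are added
        Condition: where "Request 1 Data" is not blank
        Action: copy the row to the target summary sheet

    The not-blank condition is what filters out the configs that ran without any data for that particular request.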

    This gives the target sheet a row for every request item, all sharing the same unique response ID. One downside is that the "common data" config has to run on an hourly schedule, because otherwise it would try to map common data to the target sheet before the individual request data has made it there.

    Genevieve has probably presented the better solution, but here's another option that may suit a slightly different use case. Thanks again.