I am working on a small web app that stores user data locally in IndexedDB, which can be imported/exported as JSON files. Since I plan to update the site over time, I want to know what best practices I should follow so that my app can import user data from older versions. This could relate to how I define the properties of my user data object to make it future-proof, or to any library or tool I could adopt that would make the migration process easier.

Do keep these points in mind:

  1. I am using Next.js to build this application and Dexie to manage IndexedDB
  2. Without going into details, the user data file makes use of heavily nested objects and arrays and most likely won’t fit in a cookie or even in localStorage
  3. This web app acts as a proof of concept which must only make use of the aforementioned core technologies, regardless of whether more efficient alternatives exist.
  • Ephera@lemmy.ml · 25 days ago

    Two rules of thumb that I’ve found useful:

    1. Introduce a top-level version field. If necessary, you can introduce a version 2, but keep the parser for version 1 around.
    2. You want to err on the side of using objects in place of single values, because you can add fields to an object without breaking backwards compatibility.
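    A minimal sketch of what these two rules might look like together (the `ExportV1`/`ExportV2` shapes and field names here are made up):

```typescript
// Hypothetical versioned export format: a top-level "version" field,
// plus one parser per version.
interface ExportV1 {
  version: 1;
  // A single value...
  theme: string;
}

interface ExportV2 {
  version: 2;
  // ...later replaced by an object, so new fields can be added
  // without breaking backwards compatibility.
  theme: { name: string; accentColor?: string };
}

function parseExport(raw: string): ExportV2 {
  const data = JSON.parse(raw) as ExportV1 | ExportV2;
  switch (data.version) {
    case 1:
      // Keep the v1 parser around: lift the old single value into an object.
      return { version: 2, theme: { name: data.theme } };
    case 2:
      return data;
    default:
      throw new Error("Unsupported export version");
  }
}
```

    Because the version 1 parser stays around, an old export file still loads; it just gets lifted into the current shape on the way in.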
    • Superb@lemmy.blahaj.zone · 20 days ago

      You should also have a way to convert from version 1 to version 2. Then if someone loads an old version you can save it back in the new format.
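      One way to sketch this, assuming hypothetical version contents, is a chain of per-version upgrade steps, so a file from any old version walks up to the current format one step at a time:

```typescript
// Loosely typed document with a mandatory version field.
type Doc = { version: number; [key: string]: unknown };

const CURRENT_VERSION = 3;

// Each step upgrades version N to N + 1 (the transformations are made up).
const upgrades: Record<number, (d: Doc) => Doc> = {
  // v1 -> v2: move a bare display name into a profile object
  1: ({ name, ...rest }) => ({ ...rest, version: 2, profile: { name } }),
  // v2 -> v3: add a settings object with defaults
  2: (d) => ({ ...d, version: 3, settings: { theme: "light" } }),
};

function migrate(data: Doc): Doc {
  let current = data;
  while (current.version < CURRENT_VERSION) {
    const step = upgrades[current.version];
    if (!step) throw new Error(`No upgrade from version ${current.version}`);
    current = step(current);
  }
  // Now at CURRENT_VERSION: save it back so the next load skips the chain.
  return current;
}
```

      Writing each step as N → N+1 means you never need a direct v1 → v3 converter; old steps just accumulate.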

  • Prison Mike@links.hackliberty.org · 25 days ago

    I can’t speak to your framework specifically, but:

    I’m assuming this is a REST API. I would suggest versioning the API, like /api/v1 for example.

    It’s funny, I’m currently dealing with this at work. The old API was just /api, but on the backend we’ve mapped it to API::V1 (Ruby). It gets a little pesky backporting things to the older API, so it’s good to start with as solid a foundation as possible.

    Something else I might suggest: use a good serializer and deserializer. You don’t want to muck up your models with crazy translations for everything; having a middle layer to perform that translation has been so, so beneficial to us.
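    A sketch of such a middle layer, with a made-up wire format and internal model, might look like:

```typescript
// Hypothetical wire format: what actually sits in the JSON file or payload,
// kept separate from the model the rest of the app works with.
interface UserWire {
  user_name: string;
  created_at: string; // ISO 8601 string on the wire
}

interface User {
  userName: string;
  createdAt: Date;
}

// All translation lives here, not scattered across the models.
const deserializeUser = (w: UserWire): User => ({
  userName: w.user_name,
  createdAt: new Date(w.created_at),
});

const serializeUser = (u: User): UserWire => ({
  user_name: u.userName,
  created_at: u.createdAt.toISOString(),
});
```

    If the wire format ever changes, only this layer changes; the models stay untouched.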

  • hosaka@programming.dev · 25 days ago

    Use an OpenAPI schema. You can define data models and endpoints, or just the models (I do this at work), then generate your code using openapi-generator.

    • Prison Mike@links.hackliberty.org · 25 days ago

      Do you happen to do this in Ruby on Rails? I don’t know what happened, but it seems like Swagger, JSON:API, and the serializers/deserializers are all abandoned.

      For personal projects I use GraphQL for everything, I’m not a fan of REST these days. Let me define a schema and let clients screw around with the data. I just won’t waste the time anymore despite the performance impact everyone might cry about.

      • hosaka@programming.dev · 23 days ago

        Honestly not sure about Swagger, I’ve only ever used swagger-ui to show the API docs on a webpage. OpenAPI as a standard and openapi-generator are not abandoned and quite active. I’ll give you an example of how I use it.

        I have a FastAPI server in Python that defines some endpoints and the data models it works with; it exports an openapi.json definition. I also have a common schemas library defined with Pydantic that exports an openapi.json of its own (Python was chosen to make it easier for other team members to make quick changes). This schemas library is also imported in the FastAPI app, so basically only the data models are shared.

        I use the FastAPI openapi.json to generate C++ code in one application (the end-user app) with openapi-generator-cli; serialization/deserialization is handled by the generated code. Since the Pydantic schema is a dependency of the FastAPI server, both the endpoints and the data models get generated. The Pydantic openapi.json is also used by our frontend, written in TypeScript, to generate data models only, since the frontend doesn’t need to call FastAPI directly (though it has the option to in the future by generating from the FastAPI openapi.json instead).

        This ensures that we’re using the same schema across all codebases. When I make changes to the schema, the code gets re-generated and included in the new C++/web app builds. There are multiple ways to go about versioning, but for a data-only schema I’d just keep it backwards compatible forever (by adding new props as optional fields rather than required, and slowly deprecating/removing props that are no longer used).

        I found this to be more convoluted than just using something like gRPC/Protobuf (which can also be serialized from JSON); I’ve used it before and it was great. But for other devs who only need to change a few lines of Python, not having to deal with the protobuf compiler makes it a more frictionless solution, at the cost of more moving parts and some CI/CD setup on my side.

      • expr@programming.dev · 24 days ago

        Obligatory “JSON APIs are not REST because JSON is not hypermedia”.

        GraphQL is a mess too as you throw out any ability to reason about query performance and it still requires thick clients with complicated/duplicated business logic.

        If you’re doing RoR anyway, then go for https://htmx.org/. It’s much, much simpler and closer to how the web was originally designed. Highly recommend this book the author wrote on the subject (also provides tutorials walking through building an app): https://hypermedia.systems/book/contents/.

        • hosaka@programming.dev · 23 days ago

          htmx is great, but I don’t think it’s what OP needs, since the input and desired output are not hypermedia in the first place.

          • expr@programming.dev · 23 days ago

            I don’t know what you mean by an API standard, but yes, it is technically a JavaScript library. That’s only an implementation detail, though; the spirit of htmx is that you write very little JavaScript. JavaScript is simply used to extend the HTML standard to support the full concept of hypermedia for interactive applications. An htmx-driven application embraces hypertext as the engine of application state, rather than the common thick-client SPA hitting data APIs. In such a model, clients are truly thin and have very little logic of their own; instead, view logic is driven by the server. It has been around for quite a long time and is very mature.

            It’s fundamentally different than most JavaScript libraries out there, which are focused on thick clients by and large.

            • Prison Mike@links.hackliberty.org · 23 days ago

              I’m discussing APIs that can be consumed by others, not something for my frontend to use.

              My frontend uses Hotwire — I’m not using GraphQL as some Node.js guy writing the entire frontend in JavaScript.

              I think you’re discussing PWA technologies where I’m trying to talk about web APIs.

              • expr@programming.dev · 22 days ago

                Ah, I see, my bad. You mentioned Ruby on Rails and GraphQL, so I assumed you were talking about some kind of MPA situation.

                Yeah, htmx doesn’t replace data APIs, for sure. Still not a fan of GraphQL for that purpose, for the reasons above. There are a lot of good options for RPC stuff, or even better, you can use message queues. GraphQL is just a bad idea for production systems, IMO.

                • Prison Mike@links.hackliberty.org · 22 days ago

                  Yeah, everyone says this, then I look around at REST APIs (as both a consumer and a developer) and 99% are trash.

                  I’m loving GraphQL mainly for “take only what you need” and type definitions. Every other standard I can find has some crummy gem, serializers that need to be hacked because they never work out of the box, etc.

                  As soon as my experience changes maybe I’ll change my mind, but I’ve had to develop some REST APIs using Ruby and Rails and wasn’t happy. Meanwhile my side projects using GraphQL are just incredible, and I don’t want to kill myself after developing it.