Open Standard for Metadata. A single place to discover, collaborate, and get your data right.

Overview
OpenMetadata


What is OpenMetadata?

OpenMetadata is an open standard for metadata - a single place to discover, collaborate, and get your data right.

OpenMetadata includes the following:

  • Metadata schemas - define the core abstractions and vocabulary for metadata, with schemas for Types, Entities, and Relationships between entities. This is the foundation of the Open Metadata Standard.

  • Metadata store - stores the metadata graph that connects data assets with user- and tool-generated metadata.

  • Metadata APIs - for producing and consuming metadata, built on the schemas, for user interfaces and for integrating tools, systems, and services.

  • Ingestion framework - a pluggable framework for integrating tools and ingesting metadata into the metadata store. The ingestion framework already supports well-known data warehouses - Google BigQuery, Snowflake, Amazon Redshift, and Apache Hive - and databases - MySQL, Postgres, Oracle, and MSSQL.

  • OpenMetadata User Interface - a single place for users to discover and collaborate on all their data.
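As a rough illustration of these APIs (the `/api/v1/tables` and `/api/v1/search/query` paths appear in the examples later on this page; the helper functions here are our own sketch, not part of any official client), entity and search endpoints can be addressed like this:

```python
# Sketch only: URL helpers for two OpenMetadata endpoints that appear
# elsewhere on this page. Helper names are illustrative, not an official client.
from urllib.parse import quote

BASE = "http://localhost:8585"

def table_url(table_id, base=BASE):
    # Single table entity, e.g. GET /api/v1/tables/{id}
    return f"{base}/api/v1/tables/{quote(table_id)}"

def search_url(q="*", from_=0, size=10, base=BASE):
    # Search endpoint used by the UI landing page
    return f"{base}/api/v1/search/query?q={quote(q)}&from={from_}&size={size}"

print(table_url("338fe5bb-b125-4ba6-aa61-313b29e9947d"))
```

Fetching these URLs returns JSON entity documents such as the table payload shown further down this page.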

Try our Sandbox

Visit our demo at http://sandbox.open-metadata.org

Install and run OpenMetadata

Get up and running in a few minutes:

git clone https://github.com/open-metadata/OpenMetadata
cd OpenMetadata/docker/metadata
docker-compose up -d

Then visit http://localhost:8585

For more details on running OpenMetadata on your local machine or in production, see our Install Doc.

Documentation and Support

Check out OpenMetadata documentation for a complete description of OpenMetadata's features.

Join our Slack Community if you get stuck, want to chat, or are thinking of a new feature.

Or join the group at https://groups.google.com/g/openmetadata-users

We're here to help - and make OpenMetadata even better!

Contributors

We ❤️ all contributions, big and small!

Read Build Code and Run Tests to learn how to set up your local development environment.

If you want to, you can reach out via Slack or email and we'll set up a pair programming session to get you started.

License

OpenMetadata is released under Apache License, Version 2.0

Issues
  • Added delete tag API

    Added a delete tag API with a usage condition.

    Type of change :

    • [x] New feature
    opened by parthp2107 4
  • Changing Tier does not replace the previously selected Tier

    Patch API : api/v1/tables/

    Payload:

    [{"op":"replace","path":"/tags/0/tagFQN","value":"Tier.Tier3"}]
    

    Response:

    {
      "id": "338fe5bb-b125-4ba6-aa61-313b29e9947d",
      "name": "sql_sizing_profiles",
      "href": "http://localhost:8585/api/v1/tables/338fe5bb-b125-4ba6-aa61-313b29e9947d",
      "fullyQualifiedName": "aws_redshift.information_schema.sql_sizing_profiles",
    
      <----------Other_fields------------->
    
      "tags": [
        {
          "tagFQN": "Tier.Tier1",
          "labelType": "Manual",
          "state": "Confirmed"
        },
        {
          "tagFQN": "Tier.Tier3",
          "labelType": "Manual",
          "state": "Confirmed"
        }
      ]
    }
    

    Expected behaviour: the response should only have one Tier set, i.e. Tier.Tier3 in this case.
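    A minimal sketch of the expected behaviour, assuming Tier tags are meant to be mutually exclusive (the `apply_tag` helper is hypothetical, not OpenMetadata's actual code):

```python
# Illustrative only: Tier.* labels are mutually exclusive, so applying a
# new Tier tag should first drop any previously applied Tier tag.
def apply_tag(tags, new_tag):
    if new_tag["tagFQN"].startswith("Tier."):
        tags = [t for t in tags if not t["tagFQN"].startswith("Tier.")]
    return tags + [new_tag]

tags = [{"tagFQN": "Tier.Tier1", "labelType": "Manual", "state": "Confirmed"}]
tags = apply_tag(tags, {"tagFQN": "Tier.Tier3", "labelType": "Manual", "state": "Confirmed"})
print([t["tagFQN"] for t in tags])  # only Tier.Tier3 remains
```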

    blocker 
    opened by darth-coder00 4
  • Disable add tag button for non admin users

    Right now we let the user enter tag category information and only when they complete tag category action, we tell them they can't do it. This is wasted work.

    When a non-admin user clicks the add categories button, we should show a modal: "Only Administrators can add new Tag Categories. Please contact (provide a list of admins) for help." The modal should have an Okay or Dismiss button.

    blocker 
    opened by sureshms 3
  • Add tier support for topics

    Currently we support tiers for tables; we need to add tier support for topics as well.

    blocker 
    opened by Sachin-chaurasiya 3
  • added sample data support to show sample data on dataset details page.

    Closes #125

    Describe your changes :

    I worked on the dataset details page because of the need to show sample data.

    Type of change :

    • [x] New feature

    Frontend Preview (Screenshots) :

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] I have performed a self-review of my own code.
    • [x] I have tagged my reviewers below.
    • [ ] I have commented on my code, particularly in hard-to-understand areas.
    • [x] My changes generate no new warnings.
    • [ ] I have added tests that prove my fix is effective or that my feature works.
    • [x] All new and existing tests passed.

    Reviewers

    @shahsank3t, @darth-coder00

    opened by Sachin-chaurasiya 3
  • Add support for Presto

    We already have support for Athena. Let's add support for Presto.

    new feature 
    opened by sureshms 3
  • Elasticsearch error

    I'm trying commit 45b2967a6fe2df1972682e855eb4958a9e512fbe in IntelliJ, and I get this error when I open the landing page:

    INFO [21:12:49.301] [dw-41 - GET /api/v1/teams] o.o.c.r.t.TeamResource - Returning 0 teams
    INFO [21:12:49.485] [dw-47 - GET /api/v1/search/query?q=*&from=0&size=10] o.o.c.r.s.SearchResource - {"from":0,"size":10,"query":{"query_string":{"query":"*","fields":["column_descriptions^1.0","column_names^1.0","description^1.0","table_name^5.0"],"type":"best_fields","default_operator":"or","max_determinized_states":10000,"enable_position_increments":true,"fuzziness":"AUTO","fuzzy_prefix_length":0,"fuzzy_max_expansions":50,"phrase_slop":0,"lenient":true,"escape":false,"auto_generate_synonyms_phrase_query":true,"fuzzy_transpositions":true,"boost":1.0}},"aggregations":{"Service Type":{"terms":{"field":"service_type","size":10,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]}},"Tier":{"terms":{"field":"tier","size":10,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]}},"Tags":{"terms":{"field":"tags","size":10,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]}}},"highlight":{"pre_tags":["<span class=\"text-highlighter\">"],"post_tags":["</span>"],"fields":{"description":{"type":"unified"},"table_name":{"type":"unified"}}}}
    INFO [21:12:49.788] [dw-47 - GET /api/v1/search/query?q=*&from=0&size=10] o.o.c.e.CatalogGenericExceptionMapper - exception
    org.elasticsearch.ElasticsearchStatusException: Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]
    	at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:176)
    	at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1933)
    	at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1910)
    	at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1667)
    	at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1624)
    	at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1594)
    	at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:1110)
    	at org.openmetadata.catalog.resources.search.SearchResource.search(SearchResource.java:130)
    	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    	at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52)
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124)
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167)
    	at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$ResponseOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:176)
    	at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79)
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:475)
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:397)
    	at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:81)
    	at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:255)
    	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
    	at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
    	at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
    	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
    	at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:234)
    	at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680)
    	at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:394)
    	at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
    	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:366)
    	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:319)
    	at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
    	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:799)
    	at org.eclipse.jetty.servlet.ServletHandler$ChainEnd.doFilter(ServletHandler.java:1626)
    	at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:35)
    	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    	at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:47)
    	at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:41)
    	at org.eclipse.jetty.servlet.FilterHolder.doFilter(FilterHolder.java:193)
    	at org.eclipse.jetty.servlet.ServletHandler$Chain.doFilter(ServletHandler.java:1601)
    	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:548)
    	at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
    	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1435)
    	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
    	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:501)
    	at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
    	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1350)
    	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    	at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:313)
    	at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
    	at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:763)
    	at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:54)
    	at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:179)
    	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    	at org.eclipse.jetty.server.Server.handle(Server.java:516)
    	at org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:388)
    	at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:633)
    	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:380)
    	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
    	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    	at org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:338)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:315)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:173)
    	at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
    	at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:383)
    	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:882)
    	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1036)
    	at java.base/java.lang.Thread.run(Thread.java:834)
    	Suppressed: org.elasticsearch.client.ResponseException: method [POST], host [http://localhost:9200], URI [/table_search_index/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 400 Bad Request]
    {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [service_type] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"table_search_index","node":"iyw66GKuRRSLB9-5rjsM2A","reason":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [service_type] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}],"caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [service_type] in order to load field data by uninverting the inverted index. Note that this can use significant memory.","caused_by":{"type":"illegal_argument_exception","reason":"Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [service_type] in order to load field data by uninverting the inverted index. Note that this can use significant memory."}}},"status":400}
    		at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:326)
    		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
    		at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
    		at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1654)
    		... 70 common frames omitted
    Caused by: org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [service_type] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]
    	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485)
    	at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396)
    	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426)
    	at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:592)
    	at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:168)
    	... 73 common frames omitted
    Caused by: org.elasticsearch.ElasticsearchException: Elasticsearch exception [type=illegal_argument_exception, reason=Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [service_type] in order to load field data by uninverting the inverted index. Note that this can use significant memory.]
    	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485)
    	at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396)
    	at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426)
    	... 77 common frames omitted
    ERROR [21:12:49.789] [dw-47 - GET /api/v1/search/query?q=*&from=0&size=10] o.o.c.r.s.SearchResource - Got exception: [ElasticsearchStatusException] / message [Elasticsearch exception [type=search_phase_execution_exception, reason=all shards failed]] / related resource location: [org.openmetadata.catalog.resources.search.SearchResource.search](SearchResource.java:130)
    [stack trace identical to the one above]
    

    I'm using this docker-compose file:

    #  Licensed to the Apache Software Foundation (ASF) under one or more
    #  contributor license agreements. See the NOTICE file distributed with
    #  this work for additional information regarding copyright ownership.
    #  The ASF licenses this file to You under the Apache License, Version 2.0
    #  (the "License"); you may not use this file except in compliance with
    #  the License. You may obtain a copy of the License at
    #
    #  http://www.apache.org/licenses/LICENSE-2.0
    #
    #  Unless required by applicable law or agreed to in writing, software
    #  distributed under the License is distributed on an "AS IS" BASIS,
    #  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    #  See the License for the specific language governing permissions and
    #  limitations under the License.
    
    version: "3.9"
    services:
      db:
        platform: linux/x86_64
        image: mysql:latest
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_USER: openmetadata_user
          MYSQL_PASSWORD: openmetadata_password
          MYSQL_DATABASE: openmetadata_db
        ports:
          - 3306:3306
    
      elasticsearch:
        image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
        environment:
          - discovery.type=single-node
        ports:
          - 9200:9200
          - 9300:9300
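    The root cause is a terms aggregation on a `text` field. A minimal sketch of an index mapping that would avoid the error, assuming the field names from the failing query above (the actual OpenMetadata mapping may differ), declares the faceted fields as `keyword`:

```python
# Sketch of an Elasticsearch mapping for table_search_index that makes the
# faceted fields aggregatable. Field names come from the failing query above;
# the real OpenMetadata mapping may differ.
import json

table_index_mapping = {
    "mappings": {
        "properties": {
            "table_name": {"type": "text"},
            "description": {"type": "text"},
            # Fields used in terms aggregations must be keyword, not text,
            # or the query fails with the "fielddata" error shown above.
            "service_type": {"type": "keyword"},
            "tier": {"type": "keyword"},
            "tags": {"type": "keyword"},
        }
    }
}
print(json.dumps(table_index_mapping, indent=2))
```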
    
    opened by amiorin 3
  • Highlight the matching texts in the search results

    backlog 
    opened by sureshms 2
  • Dashboard entity FQN is not formed properly.

    Here we are getting the dashboard name "Sales Dashboard",

    but when we go to the details page we get "sample_superset.31" instead of "sample_superset.Sales Dashboard".

    ss3
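
    The expected behaviour is that the FQN is built from the service name and the dashboard's display name, not its internal numeric id. A minimal sketch of that expectation (the function name and signature are illustrative, not OpenMetadata's actual API):

    ```python
    def dashboard_fqn(service_name: str, display_name: str) -> str:
        """Build a dashboard fully qualified name as <service>.<displayName>."""
        return f"{service_name}.{display_name}"

    # Desired: "sample_superset.Sales Dashboard", not "sample_superset.31"
    print(dashboard_fqn("sample_superset", "Sales Dashboard"))
    ```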

    bug blocker 
    opened by Sachin-chaurasiya 2
  • Fix #265 - Remove scripts related to JsonSchema2md documentation that …

    Fix #265 - Remove scripts related to JsonSchema2md documentation that are no longer necessary

    Describe your changes :

    Remove scripts related to JsonSchema2md documentation

    Type of change :

    • [ ] Bug fix
    • [ ] New feature
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [x] Documentation

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] I have performed a self-review of my own code.
    • [x] I have tagged my reviewers below.
    • [ ] I have commented on my code, particularly in hard-to-understand areas.
    • [x] My changes generate no new warnings.
    • [ ] I have added tests that prove my fix is effective or that my feature works.
    • [x] All new and existing tests passed.

    Reviewers

    @parthp2107

    opened by sureshms 2
  • Fix #466: Fix airflow example dag

    Fix #466: Fix airflow example dag

    Describe your changes :

    Fix airflow example dag

    Type of change :

    • [x] Bug fix

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] I have performed a self-review of my own code.
    • [x] I have tagged my reviewers below.
    • [x] I have commented on my code, particularly in hard-to-understand areas.
    • [x] My changes generate no new warnings.
    • [x] I have added tests that prove my fix is effective or that my feature works.
    • [x] All new and existing tests passed.

    Ingestion: @ayush-shah

    bug 
    opened by harshach 0
  • Fix airflow example dag

    Fix airflow example dag


    opened by harshach 0
  • [WIP] Adding test suites for UI

    [WIP] Adding test suites for UI

    Describe your changes :

    I worked on the ..... because ...

    Type of change :

    • [x] Bug fix
    • [x] New feature
    • [x] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [x] Documentation

    Frontend Preview (Screenshots) :

    For frontend-related changes, please link screenshots previewing your changes! Optional for backend-related changes.

    Checklist:

    • [ ] I have read the CONTRIBUTING document.
    • [ ] I have performed a self-review of my own code.
    • [ ] I have tagged my reviewers below.
    • [ ] I have commented on my code, particularly in hard-to-understand areas.
    • [ ] My changes generate no new warnings.
    • [ ] I have added tests that prove my fix is effective or that my feature works.
    • [ ] All new and existing tests passed.

    Reviewers

    opened by darth-coder00 0
  • Carousel bug fixed

    Carousel bug fixed

    Describe your changes :

    Fixed Carousel bug

    Type of change :

    • [x] Bug fix
    • [ ] New feature
    • [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
    • [ ] Documentation

    Frontend Preview (Screenshots) :

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] I have performed a self-review of my own code.
    • [x] I have tagged my reviewers below.
    • [ ] I have commented on my code, particularly in hard-to-understand areas.
    • [x] My changes generate no new warnings.
    • [ ] I have added tests that prove my fix is effective or that my feature works.
    • [x] All new and existing tests passed.

    Reviewers

    Frontend: @shahsank3t, @darth-coder00, @Sachin-chaurasiya

    opened by ShaileshParmar-WebDeveloper 0
  • [WIP] Fix #432: Added Redash Connector

    [WIP] Fix #432: Added Redash Connector

    Describe your changes :

    Added redash connector. Fix #432

    Type of change :

    • [x] New feature

    Checklist:

    • [x] I have read the CONTRIBUTING document.
    • [x] I have performed a self-review of my own code.
    • [x] I have tagged my reviewers below.
    • [ ] I have commented on my code, particularly in hard-to-understand areas.
    • [x] My changes generate no new warnings.
    • [ ] I have added tests that prove my fix is effective or that my feature works.
    • [x] All new and existing tests passed.
    opened by parthp2107 0
  • Service and service types not available for entity detail APIs

    Service and service types not available for entity detail APIs

    Service and service type are not available in the api/v1/tables API.

    Service type is not available in the api/v1/topics and api/v1/dashboards APIs.

    P.S. Here, service type means BigQuery, Kafka, Looker, etc.
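
    To illustrate what this issue is asking for (the payloads below are hypothetical, for discussion only, not the actual OpenMetadata response schema): an entity detail response should carry both the owning service and its concrete service type.

    ```python
    # Hypothetical entity detail payloads, sketching the fields this issue asks for.
    table = {
        "name": "fact_sales",
        "service": {"name": "bigquery_prod"},  # currently missing from api/v1/tables
        "serviceType": "BigQuery",             # currently missing as well
    }
    topic = {
        "name": "orders",
        "service": {"name": "kafka_prod"},
        "serviceType": "Kafka",                # missing from api/v1/topics
    }
    print(table["serviceType"], topic["serviceType"])
    ```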

    bug blocker 
    opened by darth-coder00 1
  • Integration tests need updating

    Integration tests need updating

    Integration tests for creating, fetching, and other methods need updating, as those methods have been moved elsewhere.

    bug 
    opened by ayush-shah 0
  • Redash connector to ingest dashboards

    Redash connector to ingest dashboards

    We can run a local instance of Redash: https://redash.io/help/open-source/dev-guide/docker

    opened by harshach 0
  • Handling Dashboard deletion

    Handling Dashboard deletion

    Currently we don't have any API or mechanism to delete or archive a Dashboard or a Chart.

    opened by ayush-shah 0
  • Documentation update - Ingestion

    Documentation update - Ingestion

    Updating the Ingestion documentation with:

    • New connectors
    • Commands to install the Python packages for the connectors
    • openmetadata-ingestion PyPI package
    • Ingestion Docker
    documentation 
    opened by ayush-shah 0
Releases(0.3.1-release)